DPDK patches and discussions
* Re: [dpdk-dev] [RFC PATCH v4 0/3] Add PIE support for HQoS library
  @ 2021-07-16 12:46  0%   ` Dumitrescu, Cristian
  0 siblings, 0 replies; 200+ results
From: Dumitrescu, Cristian @ 2021-07-16 12:46 UTC (permalink / raw)
  To: Liguzinski, WojciechX, dev, Singh, Jasvinder
  Cc: Dharmappa, Savinay, Ajmera, Megha

Hi Wojciech,

Thank you for doing this work!

> -----Original Message-----
> From: Liguzinski, WojciechX <wojciechx.liguzinski@intel.com>
> Sent: Monday, July 5, 2021 9:04 AM
> To: dev@dpdk.org; Singh, Jasvinder <jasvinder.singh@intel.com>;
> Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> Cc: Dharmappa, Savinay <savinay.dharmappa@intel.com>; Ajmera, Megha
> <megha.ajmera@intel.com>
> Subject: [RFC PATCH v4 0/3] Add PIE support for HQoS library
> 
> DPDK sched library is equipped with mechanism that secures it from the
> bufferbloat problem
> which is a situation when excess buffers in the network cause high latency
> and latency
> variation. Currently, it supports RED for active queue management (which is
> designed
> to control the queue length but it does not control latency directly and is now
> being
> obsoleted). However, more advanced queue management is required to
> address this problem
> and provide desirable quality of service to users.

As already mentioned by other reviewers, I don't think RED/WRED is getting obsoleted. This entire paragraph is a bit fuzzy and not really adding much value IMO, so I propose to remove it.

> 
> This solution (RFC) proposes usage of new algorithm called "PIE"
> (Proportional Integral
> controller Enhanced) that can effectively and directly control queuing latency
> to address
> the bufferbloat problem.

Please add a link to the public RFC for PIE in this cover letter.
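
For context, PIE (documented in IETF RFC 8033) periodically recomputes a drop
probability from an estimate of the queuing latency, rather than from the queue
length as RED does. A minimal sketch of that periodic update, with illustrative
parameter and variable names (not the names used in this patch set):

/* Illustrative PIE drop-probability update, run on a fixed timer.
 * target_delay, alpha and beta are tuning parameters; qdelay is the
 * current queuing-latency estimate, qdelay_old the previous one.
 */
static double drop_prob;	/* probability applied on each enqueue */
static double qdelay_old;

static void
pie_update(double qdelay, double target_delay, double alpha, double beta)
{
	drop_prob += alpha * (qdelay - target_delay) +
		     beta * (qdelay - qdelay_old);
	if (drop_prob < 0.0)
		drop_prob = 0.0;
	else if (drop_prob > 1.0)
		drop_prob = 1.0;
	qdelay_old = qdelay;
}

On enqueue, a packet is dropped with probability drop_prob; the full algorithm
in the RFC adds burst allowance, parameter auto-tuning and latency estimation.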

> 
> The implementation of mentioned functionality includes modification of
> existing and
> adding a new set of data structures to the library, adding PIE related APIs.
> This affects structures in public API/ABI. That is why deprecation notice is
> going
> to be prepared and sent.

I think you are stating the obvious here, how about removing this paragraph as well?

> 
> Liguzinski, WojciechX (3):
>   sched: add PIE based congestion management
>   example/qos_sched: add PIE support
>   example/ip_pipeline: add PIE support
> 
>  config/rte_config.h                      |   1 -
>  drivers/net/softnic/rte_eth_softnic_tm.c |   6 +-
>  examples/ip_pipeline/tmgr.c              |   6 +-
>  examples/qos_sched/app_thread.c          |   1 -
>  examples/qos_sched/cfg_file.c            |  82 ++++-
>  examples/qos_sched/init.c                |   7 +-
>  examples/qos_sched/profile.cfg           | 196 +++++++----
>  lib/sched/meson.build                    |  10 +-
>  lib/sched/rte_pie.c                      |  82 +++++
>  lib/sched/rte_pie.h                      | 393 +++++++++++++++++++++++
>  lib/sched/rte_sched.c                    | 229 +++++++++----
>  lib/sched/rte_sched.h                    |  53 ++-
>  lib/sched/version.map                    |   3 +
>  13 files changed, 888 insertions(+), 181 deletions(-)
>  create mode 100644 lib/sched/rte_pie.c
>  create mode 100644 lib/sched/rte_pie.h
> 
> --
> 2.17.1

Regards,
Cristian


* [dpdk-dev] Minutes of Technical Board Meeting 2021-06-02
@ 2021-07-16 14:51  5% Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-07-16 14:51 UTC (permalink / raw)
  To: dev

Minutes of Technical Board Meeting, 2021-06-02
==============================================


NOTE: The technical board meets every second Wednesday at
https://meet.jit.si/DPDK at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.

NOTE: Next meeting will be on Wednesday 2021-06-09 @3pm UTC, and will be
chaired by Thomas.

1/ CI infrastructure
--------------------
The current CI infrastructure is failing. The root cause appears to be
an upgrade by UNH IOL that is causing test failures. These failures impact
the patch approval process, since patches marked as failing CI are normally
not accepted.

The proposal was to have a separate set of resources for testing upgrades
before deploying them.


2/ ABI stability period
-----------------------

When initially discussed, the stability period was going to be two
years, but as a final compromise a trial period of one year was agreed,
with wording in the documentation that allows for longer periods.
The documentation (guides/contributing/abi_policy.rst) states:
 "Major ABI versions are declared no more frequently than yearly"

The proposal is to move to a two-year period, but there are some open
concerns that need addressing:
  - several data structures and inline functions need to be hidden
    to reduce the exposed ABI.
  - many experimental features need to be moved to stable status
    (a sketch of this move appears below).
  - deprecated functions and fields need to be removed.
If 21.11 is going to carry a two-year ABI window, then these cleanups are
needed.
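
For context, promoting an experimental symbol to the stable ABI is, mechanically,
a move in the library's version.map plus removal of the __rte_experimental
attribute from the declaration. A rough sketch with a hypothetical symbol name:

/* lib/foo/version.map (hypothetical symbol) while still experimental: */
EXPERIMENTAL {
	global:

	rte_foo_do_thing;
};

/* After promotion, the symbol moves into the stable block (and the
 * __rte_experimental tag is dropped from its C declaration):
 */
DPDK_22 {
	global:

	rte_foo_do_thing;

	local: *;
};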

Related discussions:

Should the scope of Long Term Stable (LTS) releases be expanded? Right now,
the scope is limited to bug fixes. Vendors and distros using LTS
would appreciate having new drivers (and PCI IDs) backported.
What about backporting standalone new libraries to LTS?
Conclusion: more discussion about requirements and risks
is needed before expanding LTS.

The current ABI policy also has indirect benefits. The ABI clamp has acted
to reduce wild/unstable changes and has led to better designs.
The downside is that there is less of a trial window for changes: if a new
feature requires an ABI change, it goes into the yearly release without
getting a longer period of review and testing.

What kind of upcoming features need ABI breakage?

Conclusion:
A task force will be set up to make a more concrete recommendation.
The task force will give a status update in two weeks (at the next TAB meeting)
and recommend an action for 21.11 in one month.


* Re: [dpdk-dev] [PATCH v1] doc: update atomic operation deprecation
  @ 2021-07-17 18:47  0% ` Honnappa Nagarahalli
  2021-07-23  9:49  4% ` [dpdk-dev] [PATCH v2] " Joyce Kong
  1 sibling, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2021-07-17 18:47 UTC (permalink / raw)
  To: Joyce Kong, thomas, stephen, Ruifeng Wang, mdr
  Cc: dev, nd, stable, Honnappa Nagarahalli, nd

<snip>

> 
> Update the incorrect description about atomic operations with provided
> wrappers in deprecation doc[1].
> 
> [1]https://mails.dpdk.org/archives/dev/2021-July/213333.html
> 
> Fixes: 7518c5c4ae6a ("doc: announce adoption of C11 atomic operations
> semantics")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Joyce Kong <joyce.kong@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>  doc/guides/rel_notes/deprecation.rst | 16 ++++++++--------
>  1 file changed, 8 insertions(+), 8 deletions(-)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 9584d6bfd7..4142315842 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -19,16 +19,16 @@ Deprecation Notices
> 
>  * rte_atomicNN_xxx: These APIs do not take memory order parameter. This
> does
>    not allow for writing optimized code for all the CPU architectures supported
> -  in DPDK. DPDK will adopt C11 atomic operations semantics and provide
> wrappers
> -  using C11 atomic built-ins. These wrappers must be used for patches that
> -  need to be merged in 20.08 onwards. This change will not introduce any
> -  performance degradation.
> +  in DPDK. DPDK has adopted atomic operations semantics. GCC atomic
> + built-ins  must be used for patches that need to be merged in 20.08
> + onwards. This change  will not introduce any performance degradation.
Since there have been objections to the language used to refer to GCC C11 atomic built-ins, maybe we should add a reference to the GCC pages?

DPDK has adopted the atomic operations from https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html. These operations must be used for patches that need to be merged in 20.08 onwards. This change will not introduce any performance degradation.

> 
>  * rte_smp_*mb: These APIs provide full barrier functionality. However, many
> -  use cases do not require full barriers. To support such use cases, DPDK will
> -  adopt C11 barrier semantics and provide wrappers using C11 atomic built-
> ins.
> -  These wrappers must be used for patches that need to be merged in 20.08
> -  onwards. This change will not introduce any performance degradation.
> +  use cases do not require full barriers. To support such use cases,
> + DPDK has  adopted atomic barrier semantics. GCC atomic built-ins and a
> + new wrapper  ``rte_atomic_thread_fence`` instead of
> + ``__atomic_thread_fence`` must be  used for patches that need to be
> + merged in 20.08 onwards. This change will  not introduce any performance
> degradation.
Same here.
To support such use cases, DPDK has adopted atomic operations from https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html. A new wrapper ``rte_atomic_thread_fence`` instead of ``__atomic_thread_fence`` must be used for patches that need to be merged in 20.08 onwards. This change will not introduce any performance degradation.
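
As an illustration of what the reworded notices ask for, a minimal sketch of
replacing the legacy calls with the GCC built-ins and the
``rte_atomic_thread_fence`` wrapper (the counter and the memory orders chosen
here are purely illustrative):

#include <rte_atomic.h>

static uint32_t refcnt;	/* illustrative shared counter */

static void
example(void)
{
	/* Instead of the legacy rte_atomic32_add(): pick the weakest
	 * memory order that the use case actually needs.
	 */
	__atomic_fetch_add(&refcnt, 1, __ATOMIC_RELAXED);

	/* Instead of calling __atomic_thread_fence() directly, use the
	 * DPDK wrapper with an explicit memory order.
	 */
	rte_atomic_thread_fence(__ATOMIC_RELEASE);
}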

> 
>  * lib: will fix extending some enum/define breaking the ABI. There are
> multiple
>    samples in DPDK that enum/define terminated with a ``.*MAX.*`` value
> which is
> --
> 2.17.1



* Re: [dpdk-dev] [PATCH v6] dmadev: introduce DMA device library
  @ 2021-07-19  6:21  3%   ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2021-07-19  6:21 UTC (permalink / raw)
  To: Chengwen Feng
  Cc: Thomas Monjalon, Ferruh Yigit, Richardson, Bruce, Jerin Jacob,
	Andrew Rybchenko, dpdk-dev, Morten Brørup, Nipun Gupta,
	Hemant Agrawal, Maxime Coquelin, Honnappa Nagarahalli,
	David Marchand, Satananda Burla, Prasun Kapoor, Ananyev,
	Konstantin

On Mon, Jul 19, 2021 at 9:02 AM Chengwen Feng <fengchengwen@huawei.com> wrote:
>
> This patch introduce 'dmadevice' which is a generic type of DMA
> device.
>
> The APIs of dmadev library exposes some generic operations which can
> enable configuration and I/O with the DMA devices.
>
> Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>


The API specification aspects look pretty good to me.

Some minor comments are below. You can add my Acked-by to the API header
file in the future version where you will split the patch.
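
For readers following the thread, a minimal usage sketch of the API as proposed
in this v6 (names are taken from the header quoted below; error handling, device
selection and cleanup are omitted, and the API may still change before merge):

#include <stdbool.h>
#include <rte_dmadev.h>

static int
dma_copy_one(uint16_t dev_id, rte_iova_t src, rte_iova_t dst, uint32_t len)
{
	struct rte_dmadev_conf dev_conf = { .max_vchans = 1 };
	struct rte_dmadev_vchan_conf vchan_conf = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
		.nb_desc = 128,
	};
	uint16_t last_idx;
	bool has_error;
	int vchan;

	if (rte_dmadev_configure(dev_id, &dev_conf) < 0)
		return -1;
	vchan = rte_dmadev_vchan_setup(dev_id, &vchan_conf);
	if (vchan < 0 || rte_dmadev_start(dev_id) < 0)
		return -1;

	/* Enqueue the copy and ring the doorbell in the same call. */
	if (rte_dmadev_copy(dev_id, vchan, src, dst, len,
			    RTE_DMA_OP_FLAG_SUBMIT) < 0)
		return -1;

	/* Poll until the single job completes. */
	while (rte_dmadev_completed(dev_id, vchan, 1, &last_idx,
				    &has_error) == 0)
		;
	return has_error ? -1 : 0;
}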


> diff --git a/lib/dmadev/rte_dmadev.h b/lib/dmadev/rte_dmadev.h
> new file mode 100644
> index 0000000..ecac281
> --- /dev/null
> +++ b/lib/dmadev/rte_dmadev.h
> @@ -0,0 +1,1025 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) 2021 HiSilicon Limited.
> + * Copyright(c) 2021 Intel Corporation.
> + * Copyright(c) 2021 Marvell International Ltd.
> + * Copyright(c) 2021 SmartShare Systems.
> + */
> +
> +#ifndef _RTE_DMADEV_H_
> +#define _RTE_DMADEV_H_
> +
> +/**
> + * @file rte_dmadev.h
> + *
> + * RTE DMA (Direct Memory Access) device APIs.
> + *
> + * The DMA framework is built on the following model:
> + *
> + *     ---------------   ---------------       ---------------
> + *     | virtual DMA |   | virtual DMA |       | virtual DMA |
> + *     | channel     |   | channel     |       | channel     |
> + *     ---------------   ---------------       ---------------
> + *            |                |                      |
> + *            ------------------                      |
> + *                     |                              |
> + *               ------------                    ------------
> + *               |  dmadev  |                    |  dmadev  |
> + *               ------------                    ------------
> + *                     |                              |
> + *            ------------------               ------------------
> + *            | HW-DMA-channel |               | HW-DMA-channel |
> + *            ------------------               ------------------
> + *                     |                              |
> + *                     --------------------------------
> + *                                     |
> + *                           ---------------------
> + *                           | HW-DMA-Controller |
> + *                           ---------------------
> + *
> + * The DMA controller could have multiple HW-DMA-channels (aka. HW-DMA-queues),
> + * each HW-DMA-channel should be represented by a dmadev.
> + *
> + * The dmadev could create multiple virtual DMA channels, each virtual DMA
> + * channel represents a different transfer context. The DMA operation request
> + * must be submitted to the virtual DMA channel. e.g. Application could create
> + * virtual DMA channel 0 for memory-to-memory transfer scenario, and create
> + * virtual DMA channel 1 for memory-to-device transfer scenario.
> + *
> + * The dmadev are dynamically allocated by rte_dmadev_pmd_allocate() during the
> + * PCI/SoC device probing phase performed at EAL initialization time. And could
> + * be released by rte_dmadev_pmd_release() during the PCI/SoC device removing
> + * phase.
> + *
> + * This framework uses 'uint16_t dev_id' as the device identifier of a dmadev,
> + * and 'uint16_t vchan' as the virtual DMA channel identifier in one dmadev.
> + *
> + * The functions exported by the dmadev API to setup a device designated by its
> + * device identifier must be invoked in the following order:
> + *     - rte_dmadev_configure()
> + *     - rte_dmadev_vchan_setup()
> + *     - rte_dmadev_start()
> + *
> + * Then, the application can invoke dataplane APIs to process jobs.
> + *
> + * If the application wants to change the configuration (i.e. invoke
> + * rte_dmadev_configure() or rte_dmadev_vchan_setup()), it must invoke
> + * rte_dmadev_stop() first to stop the device and then do the reconfiguration
> + * before invoking rte_dmadev_start() again. The dataplane APIs should not be
> + * invoked when the device is stopped.
> + *
> + * Finally, an application can close a dmadev by invoking the
> + * rte_dmadev_close() function.
> + *
> + * The dataplane APIs include two parts:
> + * The first part is the submission of operation requests:
> + *     - rte_dmadev_copy()
> + *     - rte_dmadev_copy_sg()
> + *     - rte_dmadev_fill()
> + *     - rte_dmadev_submit()
> + *
> + * These APIs could work with different virtual DMA channels which have
> + * different contexts.
> + *
> + * The first three APIs are used to submit the operation request to the virtual
> + * DMA channel, if the submission is successful, a uint16_t ring_idx is
> + * returned, otherwise a negative number is returned.
> + *
> + * The last API was used to issue doorbell to hardware, and also there are flags
> + * (@see RTE_DMA_OP_FLAG_SUBMIT) parameter of the first three APIs could do the
> + * same work.
> + *
> + * The second part is to obtain the result of requests:
> + *     - rte_dmadev_completed()
> + *         - return the number of operation requests completed successfully.
> + *     - rte_dmadev_completed_status()
> + *         - return the number of operation requests completed.
> + *
> + * @note If the dmadev works in silent mode, application does not invoke the

in silent mode (@see RTE_DMA_DEV_CAPA_SILENT)

> + * above two completed APIs.
> + *
> + * About the ring_idx which enqueue APIs (e.g. rte_dmadev_copy()
> + * rte_dmadev_fill()) returned, the rules are as follows:
> + *     - ring_idx for each virtual DMA channel are independent.
> + *     - For a virtual DMA channel, the ring_idx is monotonically incremented,
> + *       when it reach UINT16_MAX, it wraps back to zero.
> + *     - This ring_idx can be used by applications to track per-operation
> + *       metadata in an application-defined circular ring.
> + *     - The initial ring_idx of a virtual DMA channel is zero, after the
> + *       device is stopped, the ring_idx needs to be reset to zero.
> + *
> + * One example:
> + *     - step-1: start one dmadev
> + *     - step-2: enqueue a copy operation, the ring_idx return is 0
> + *     - step-3: enqueue a copy operation again, the ring_idx return is 1
> + *     - ...
> + *     - step-101: stop the dmadev
> + *     - step-102: start the dmadev
> + *     - step-103: enqueue a copy operation, the cookie return is 0
> + *     - ...
> + *     - step-x+0: enqueue a fill operation, the ring_idx return is 65535
> + *     - step-x+1: enqueue a copy operation, the ring_idx return is 0
> + *     - ...
> + *
> + * The DMA operation address used in enqueue APIs (i.e. rte_dmadev_copy(),
> + * rte_dmadev_copy_sg(), rte_dmadev_fill()) defined as rte_iova_t type. The
> + * dmadev supports two types of address: memory address and device address.
> + *
> + * - memory address: the source and destination address of the memory-to-memory
> + * transfer type, or the source address of the memory-to-device transfer type,
> + * or the destination address of the device-to-memory transfer type.
> + * @note If the device support SVA, the memory address can be any VA address,

If the device supports SVA (@see RTE_DMA_DEV_CAPA_SVA)

> + * otherwise it must be an IOVA address.
> + *
> + * - device address: the source and destination address of the device-to-device
> + * transfer type, or the source address of the device-to-memory transfer type,
> + * or the destination address of the memory-to-device transfer type.
> + *
> + * By default, all the functions of the dmadev API exported by a PMD are
> + * lock-free functions which assume to not be invoked in parallel on different
> + * logical cores to work on the same target dmadev object.
> + * @note Different virtual DMA channels on the same dmadev *DO NOT* support
> + * parallel invocation because there virtual DMA channels share the same

their?

> + * HW-DMA-channel.
> + *
> + */
> +
> +#include <rte_common.h>
> +#include <rte_compat.h>
> +#include <rte_dev.h>
> +#include <rte_errno.h>
> +#include <rte_memory.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +#define RTE_DMADEV_NAME_MAX_LEN        RTE_DEV_NAME_MAX_LEN
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * @param dev_id
> + *   DMA device index.
> + *
> + * @return
> + *   - If the device index is valid (true) or not (false).
> + */
> +__rte_experimental
> +bool
> +rte_dmadev_is_valid_dev(uint16_t dev_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Get the total number of DMA devices that have been successfully
> + * initialised.
> + *
> + * @return
> + *   The total number of usable DMA devices.
> + */
> +__rte_experimental
> +uint16_t
> +rte_dmadev_count(void);
> +
> +/* Enumerates DMA device capabilities. */
> +#define RTE_DMA_DEV_CAPA_MEM_TO_MEM    (1ull << 0)
> +/**< DMA device support memory-to-memory transfer.
> + *
> + * @see struct rte_dmadev_info::dev_capa
> + */
> +
> +#define RTE_DMA_DEV_CAPA_MEM_TO_DEV    (1ull << 1)
> +/**< DMA device support memory-to-device transfer.
> + *
> + * @see struct rte_dmadev_info::dev_capa
> + * @see struct rte_dmadev_port_param::port_type
> + */
> +
> +#define RTE_DMA_DEV_CAPA_DEV_TO_MEM    (1ull << 2)
> +/**< DMA device support device-to-memory transfer.
> + *
> + * @see struct rte_dmadev_info::dev_capa
> + * @see struct rte_dmadev_port_param::port_type
> + */
> +
> +#define RTE_DMA_DEV_CAPA_DEV_TO_DEV    (1ull << 3)
> +/**< DMA device support device-to-device transfer.
> + *
> + * @see struct rte_dmadev_info::dev_capa
> + * @see struct rte_dmadev_port_param::port_type
> + */
> +
> +#define RTE_DMA_DEV_CAPA_SVA           (1ull << 4)
> +/**< DMA device support SVA which could use VA as DMA address.
> + * If device support SVA then application could pass any VA address like memory
> + * from rte_malloc(), rte_memzone(), malloc, stack memory.
> + * If device don't support SVA, then application should pass IOVA address which
> + * from rte_malloc(), rte_memzone().
> + *
> + * @see struct rte_dmadev_info::dev_capa
> + */
> +
> +#define RTE_DMA_DEV_CAPA_SILENT                (1ull << 5)
> +/**< DMA device support work in silent mode.
> + * In this mode, application don't required to invoke rte_dmadev_completed*()
> + * API.
> + *
> + * @see struct rte_dmadev_conf::silent_mode
> + */
> +
> +#define RTE_DMA_DEV_CAPA_OPS_COPY      (1ull << 32)
> +/**< DMA device support copy ops.
> + * This capability start with index of 32, so that it could leave gap between
> + * normal capability and ops capability.
> + *
> + * @see struct rte_dmadev_info::dev_capa
> + */
> +
> +#define RTE_DMA_DEV_CAPA_OPS_COPY_SG   (1ull << 33)
> +/**< DMA device support scatter-list copy ops.
> + *
> + * @see struct rte_dmadev_info::dev_capa
> + */
> +
> +#define RTE_DMA_DEV_CAPA_OPS_FILL      (1ull << 34)
> +/**< DMA device support fill ops.
> + *
> + * @see struct rte_dmadev_info::dev_capa
> + */
> +
> +/**
> + * A structure used to retrieve the information of a DMA device.
> + */
> +struct rte_dmadev_info {
> +       struct rte_device *device; /**< Generic Device information. */
> +       uint64_t dev_capa; /**< Device capabilities (RTE_DMA_DEV_CAPA_*). */
> +       uint16_t max_vchans;
> +       /**< Maximum number of virtual DMA channels supported. */
> +       uint16_t max_desc;
> +       /**< Maximum allowed number of virtual DMA channel descriptors. */
> +       uint16_t min_desc;
> +       /**< Minimum allowed number of virtual DMA channel descriptors. */
> +       uint16_t nb_vchans; /**< Number of virtual DMA channel configured. */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Retrieve information of a DMA device.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param[out] dev_info
> + *   A pointer to a structure of type *rte_dmadev_info* to be filled with the
> + *   information of the device.
> + *
> + * @return
> + *   - =0: Success, driver updates the information of the DMA device.
> + *   - <0: Error code returned by the driver info get function.
> + *
> + */
> +__rte_experimental
> +int
> +rte_dmadev_info_get(uint16_t dev_id, struct rte_dmadev_info *dev_info);
> +
> +/**
> + * A structure used to configure a DMA device.
> + */
> +struct rte_dmadev_conf {
> +       uint16_t max_vchans;
> +       /**< Maximum number of virtual DMA channel to use.
> +        * This value cannot be greater than the field 'max_vchans' of struct
> +        * rte_dmadev_info which get from rte_dmadev_info_get().
> +        */
> +       uint8_t silent_mode;

bool instead of uint8_t?

> +       /**< Indicates whether to work in silent mode.
> +        * 0-default mode, 1-silent mode.
> +        *
> +        * @see RTE_DMA_DEV_CAPA_SILENT
> +        */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Configure a DMA device.
> + *
> + * This function must be invoked first before any other function in the
> + * API. This function can also be re-invoked when a device is in the
> + * stopped state.
> + *
> + * @param dev_id
> + *   The identifier of the device to configure.
> + * @param dev_conf
> + *   The DMA device configuration structure encapsulated into rte_dmadev_conf
> + *   object.
> + *
> + * @return
> + *   - =0: Success, device configured.
> + *   - <0: Error code returned by the driver configuration function.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_configure(uint16_t dev_id, const struct rte_dmadev_conf *dev_conf);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Start a DMA device.
> + *
> + * The device start step is the last one and consists of setting the DMA
> + * to start accepting jobs.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + *
> + * @return
> + *   - =0: Success, device started.
> + *   - <0: Error code returned by the driver start function.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_start(uint16_t dev_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Stop a DMA device.
> + *
> + * The device can be restarted with a call to rte_dmadev_start().
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + *
> + * @return
> + *   - =0: Success, device stopped.
> + *   - <0: Error code returned by the driver stop function.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_stop(uint16_t dev_id);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Close a DMA device.
> + *
> + * The device cannot be restarted after this call.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + *
> + * @return
> + *  - =0: Successfully close device
> + *  - <0: Failure to close device
> + */
> +__rte_experimental
> +int
> +rte_dmadev_close(uint16_t dev_id);
> +
> +/**
> + * rte_dma_direction - DMA transfer direction defines.
> + */
> +enum rte_dma_direction {
> +       RTE_DMA_DIR_MEM_TO_MEM = 0,

No need to give = 0 as it starts with 0.

> +       /**< DMA transfer direction - from memory to memory.
> +        *
> +        * @see struct rte_dmadev_vchan_conf::direction
> +        */
> +       RTE_DMA_DIR_MEM_TO_DEV = 1,

No need to give = 1.

> +       /**< DMA transfer direction - from memory to device.
> +        * In a typical scenario, ARM SoCs are installed on x86 servers as iNICs

We can remove ARM. It can be RISC-V too. ;-)


> +        * through the PCIE interface. In this case, the ARM SoCs works in

PCIe

> +        * EP(endpoint) mode, it could initiate a DMA move request from memory
> +        * (which is ARM memory) to device (which is x86 host memory).

to the device.

> +        *
> +        * @see struct rte_dmadev_vchan_conf::direction
> +        */
> +       RTE_DMA_DIR_DEV_TO_MEM = 2,
> +       /**< DMA transfer direction - from device to memory.
> +        * In a typical scenario, ARM SoCs are installed on x86 servers as iNICs
> +        * through the PCIE interface. In this case, the ARM SoCs works in
> +        * EP(endpoint) mode, it could initiate a DMA move request from device
> +        * (which is x86 host memory) to memory (which is ARM memory).
> +        *
> +        * @see struct rte_dmadev_vchan_conf::direction
> +        */
> +       RTE_DMA_DIR_DEV_TO_DEV = 3,
> +       /**< DMA transfer direction - from device to device.
> +        * In a typical scenario, ARM SoCs are installed on x86 servers as iNICs
> +        * through the PCIE interface. In this case, the ARM SoCs works in
> +        * EP(endpoint) mode, it could initiate a DMA move request from device
> +        * (which is x86 host memory) to device (which is another x86 host
> +        * memory).
> +        *
> +        * @see struct rte_dmadev_vchan_conf::direction
> +        */
> +       RTE_DMA_DIR_BUTT

# Doxygen comment is missing.
# Typically we use RTE_DMA_DIR_MAX.
# If there is no real need for this, please remove it, as it can break
# ABI if we add more items.
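
To make the ABI concern concrete, a hypothetical illustration (type and field
names invented, not from this patch):

/* An application built against the old library bakes the sentinel value
 * into its own data layout and loop bounds:
 */
enum dir { DIR_MEM_TO_MEM, DIR_MEM_TO_DEV, DIR_MAX };

struct app_stats {
	uint64_t ops_per_dir[DIR_MAX];	/* size fixed at application build time */
};

/* If a later library release adds a new direction before DIR_MAX, the
 * library's value of DIR_MAX no longer matches the one compiled into the
 * application, so any array size or bounds check shared across the ABI
 * boundary silently breaks.
 */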


> +};
> +
> +/**
> + * enum rte_dmadev_port_type - DMA access port type defines.
> + *
> + * @see struct rte_dmadev_port_param::port_type
> + */
> +enum rte_dmadev_port_type {
> +       RTE_DMADEV_PORT_NONE = 0,

No need for = 0

> +       RTE_DMADEV_PORT_PCIE, /**< The DMA access port is PCIE. */
> +       RTE_DMADEV_PORT_BUTT


Same as the above comment for RTE_DMA_DIR_BUTT

> +};
> +
> +/**
> + * A structure used to descript DMA access port parameters.
> + *
> + * @see struct rte_dmadev_vchan_conf::src_port
> + * @see struct rte_dmadev_vchan_conf::dst_port
> + */
> +struct rte_dmadev_port_param {
> +       enum rte_dmadev_port_type port_type;
> +       /**< The device access port type.
> +        * @see enum rte_dmadev_port_type
> +        */
> +       union {
> +               /** PCIE access port parameter.
> +                *
> +                * The following model shows SoC's PCIE module connects to
> +                * multiple PCIE hosts and multiple endpoints. The PCIE module
> +                * has an integrate DMA controller.
> +                * If the DMA wants to access the memory of host A, it can be
> +                * initiated by PF1 in core0, or by VF0 of PF0 in core0.
> +                *
> +                * System Bus
> +                *    |     ----------PCIE module----------
> +                *    |     Bus
> +                *    |     Interface
> +                *    |     -----        ------------------
> +                *    |     |   |        | PCIE Core0     |
> +                *    |     |   |        |                |        -----------
> +                *    |     |   |        |   PF-0 -- VF-0 |        | Host A  |
> +                *    |     |   |--------|        |- VF-1 |--------| Root    |
> +                *    |     |   |        |   PF-1         |        | Complex |
> +                *    |     |   |        |   PF-2         |        -----------
> +                *    |     |   |        ------------------
> +                *    |     |   |
> +                *    |     |   |        ------------------
> +                *    |     |   |        | PCIE Core1     |
> +                *    |     |   |        |                |        -----------
> +                *    |     |   |        |   PF-0 -- VF-0 |        | Host B  |
> +                *    |-----|   |--------|   PF-1 -- VF-0 |--------| Root    |
> +                *    |     |   |        |        |- VF-1 |        | Complex |
> +                *    |     |   |        |   PF-2         |        -----------
> +                *    |     |   |        ------------------
> +                *    |     |   |
> +                *    |     |   |        ------------------
> +                *    |     |DMA|        |                |        ------
> +                *    |     |   |        |                |--------| EP |
> +                *    |     |   |--------| PCIE Core2     |        ------
> +                *    |     |   |        |                |        ------
> +                *    |     |   |        |                |--------| EP |
> +                *    |     |   |        |                |        ------
> +                *    |     -----        ------------------


This diagram does not show correctly in doxygen. Please fix it.

> +                *
> +                * The following structure is used to describe the above access
> +                * port.
> +                *
> +                * @note If some fields can not be supported by the
> +                * hardware/driver, then the driver ignores those fields.
> +                * Please check driver-specific documentation for limitations
> +                * and capablites.
> +                */
> +               struct {
> +                       uint64_t coreid : 4; /**< PCIE core id used. */
> +                       uint64_t pfid : 8; /**< PF id used. */
> +                       uint64_t vfen : 1; /**< VF enable bit. */
> +                       uint64_t vfid : 16; /**< VF id used. */
> +                       uint64_t pasid : 20;
> +                       /**< The pasid filed in TLP packet. */
> +                       uint64_t attr : 3;
> +                       /**< The attributes filed in TLP packet. */
> +                       uint64_t ph : 2;
> +                       /**< The processing hint filed in TLP packet. */
> +                       uint64_t st : 16;
> +                       /**< The steering tag filed in TLP packet. */
> +               } pcie;
> +       };
> +       uint64_t reserved[2]; /**< Reserved for future fields. */
> +};
> +
> +/**
> + * A structure used to configure a virtual DMA channel.
> + */
> +struct rte_dmadev_vchan_conf {
> +       enum rte_dma_direction direction;
> +       /**< Transfer direction
> +        * @see enum rte_dma_direction
> +        */
> +       uint16_t nb_desc;
> +       /**< Number of descriptor for the virtual DMA channel */
> +       struct rte_dmadev_port_param src_port;
> +       /**< 1) Used to describes the device access port parameter in the
> +        * device-to-memory transfer scenario.
> +        * 2) Used to describes the source device access port parameter in the
> +        * device-to-device transfer scenario.
> +        * @see struct rte_dmadev_port_param
> +        */
> +       struct rte_dmadev_port_param dst_port;
> +       /**< 1) Used to describes the device access port parameter in the
> +        * memory-to-device transfer scenario.
> +        * 2) Used to describes the destination device access port parameter in
> +        * the device-to-device transfer scenario.
> +        * @see struct rte_dmadev_port_param
> +        */
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Allocate and set up a virtual DMA channel.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param conf
> + *   The virtual DMA channel configuration structure encapsulated into
> + *   rte_dmadev_vchan_conf object.
> + *
> + * @return
> + *   - >=0: Allocate success, it is the virtual DMA channel id. This value must
> + *          be less than the field 'max_vchans' of struct rte_dmadev_conf
> + *          which configured by rte_dmadev_configure().
> + *   - <0: Error code returned by the driver virtual channel setup function.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_vchan_setup(uint16_t dev_id,
> +                      const struct rte_dmadev_vchan_conf *conf);
> +
> +/**
> + * rte_dmadev_stats - running statistics.
> + */
> +struct rte_dmadev_stats {
> +       uint64_t submitted_count;
> +       /**< Count of operations which were submitted to hardware. */
> +       uint64_t completed_fail_count;
> +       /**< Count of operations which failed to complete. */
> +       uint64_t completed_count;
> +       /**< Count of operations which successfully complete. */
> +};
> +
> +#define RTE_DMADEV_ALL_VCHAN   0xFFFFu
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Retrieve basic statistics of a or all virtual DMA channel(s).
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + *   If equal RTE_DMADEV_ALL_VCHAN means all channels.
> + * @param[out] stats
> + *   The basic statistics structure encapsulated into rte_dmadev_stats
> + *   object.
> + *
> + * @return
> + *   - =0: Successfully retrieve stats.
> + *   - <0: Failure to retrieve stats.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_stats_get(uint16_t dev_id, uint16_t vchan,
> +                    struct rte_dmadev_stats *stats);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Reset basic statistics of a or all virtual DMA channel(s).
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + *   If equal RTE_DMADEV_ALL_VCHAN means all channels.
> + *
> + * @return
> + *   - =0: Successfully reset stats.
> + *   - <0: Failure to reset stats.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_stats_reset(uint16_t dev_id, uint16_t vchan);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Dump DMA device info.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param f
> + *   The file to write the output to.
> + *
> + * @return
> + *   0 on success. Non-zero otherwise.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_dump(uint16_t dev_id, FILE *f);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Trigger the dmadev self test.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + *
> + * @return
> + *   - 0: Selftest successful.
> + *   - -ENOTSUP if the device doesn't support selftest
> + *   - other values < 0 on failure.
> + */
> +__rte_experimental
> +int
> +rte_dmadev_selftest(uint16_t dev_id);
> +
> +/**
> + * rte_dma_status_code - DMA transfer result status code defines.
> + */
> +enum rte_dma_status_code {
> +       RTE_DMA_STATUS_SUCCESSFUL = 0,

No need for = 0

> +       /**< The operation completed successfully. */
> +       RTE_DMA_STATUS_USRER_ABORT,
> +       /**< The operation failed to complete due abort by user.
> +        * This is mainly used when processing dev_stop, user could modidy the
> +        * descriptors (e.g. change one bit to tell hardware abort this job),
> +        * it allows outstanding requests to be complete as much as possible,
> +        * so reduce the time to stop the device.
> +        */
> +       RTE_DMA_STATUS_NOT_ATTEMPTED,
> +       /**< The operation failed to complete due to following scenarios:
> +        * The jobs in a particular batch are not attempted because they
> +        * appeared after a fence where a previous job failed. In some HW
> +        * implementation it's possible for jobs from later batches would be
> +        * completed, though, so report the status from the not attempted jobs
> +        * before reporting those newer completed jobs.
> +        */
> +       RTE_DMA_STATUS_INVALID_SRC_ADDR,
> +       /**< The operation failed to complete due invalid source address. */
> +       RTE_DMA_STATUS_INVALID_DST_ADDR,
> +       /**< The operation failed to complete due invalid destination
> +        * address.
> +        */
> +       RTE_DMA_STATUS_INVALID_LENGTH,
> +       /**< The operation failed to complete due invalid length. */
> +       RTE_DMA_STATUS_INVALID_OPCODE,
> +       /**< The operation failed to complete due invalid opcode.
> +        * The DMA descriptor could have multiple format, which are
> +        * distinguished by the opcode field.
> +        */
> +       RTE_DMA_STATUS_BUS_ERROR,
> +       /**< The operation failed to complete due bus err. */
> +       RTE_DMA_STATUS_DATA_POISION,
> +       /**< The operation failed to complete due data poison. */
> +       RTE_DMA_STATUS_DESCRIPTOR_READ_ERROR,
> +       /**< The operation failed to complete due descriptor read error. */
> +       RTE_DMA_STATUS_DEV_LINK_ERROR,
> +       /**< The operation failed to complete due device link error.
> +        * Used to indicates that the link error in the memory-to-device/
> +        * device-to-memory/device-to-device transfer scenario.
> +        */
> +       RTE_DMA_STATUS_UNKNOWN = 0x100,
> +       /**< The operation failed to complete due unknown reason.
> +        * The initial value is 256, which reserves space for future errors.
> +        */
> +};
> +
> +/**
> + * rte_dmadev_sge - can hold scatter DMA operation request entry.
> + */
> +struct rte_dmadev_sge {
> +       rte_iova_t addr; /**< The DMA operation address. */
> +       uint32_t length; /**< The DMA operation length. */
> +};
> +
> +#include "rte_dmadev_core.h"
> +
> +/* DMA flags to augment operation preparation. */
> +#define RTE_DMA_OP_FLAG_FENCE  (1ull << 0)
> +/**< DMA fence flag.
> + * It means the operation with this flag must be processed only after all
> + * previous operations are completed.
> + * If the specify DMA HW works in-order (it means it has default fence between
> + * operations), this flag could be NOP.
> + *
> + * @see rte_dmadev_copy()
> + * @see rte_dmadev_copy_sg()
> + * @see rte_dmadev_fill()
> + */
> +
> +#define RTE_DMA_OP_FLAG_SUBMIT (1ull << 1)
> +/**< DMA submit flag.
> + * It means the operation with this flag must issue doorbell to hardware after
> + * enqueued jobs.
> + */
> +
> +#define RTE_DMA_OP_FLAG_LLC    (1ull << 2)
> +/**< DMA write data to low level cache hint.
> + * Used for performance optimization, this is just a hint, and there is no
> + * capability bit for this, driver should not return error if this flag was set.
> + */
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue a copy operation onto the virtual DMA channel.
> + *
> + * This queues up a copy operation to be performed by hardware, if the 'flags'
> + * parameter contains RTE_DMA_OP_FLAG_SUBMIT then trigger doorbell to begin
> + * this operation, otherwise do not trigger doorbell.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + * @param src
> + *   The address of the source buffer.
> + * @param dst
> + *   The address of the destination buffer.
> + * @param length
> + *   The length of the data to be copied.
> + * @param flags
> + *   An flags for this operation.
> + *   @see RTE_DMA_OP_FLAG_*
> + *
> + * @return
> + *   - 0..UINT16_MAX: index of enqueued copy job.
> + *   - <0: Error code returned by the driver copy function.
> + */
> +__rte_experimental
> +static inline int
> +rte_dmadev_copy(uint16_t dev_id, uint16_t vchan, rte_iova_t src, rte_iova_t dst,
> +               uint32_t length, uint64_t flags)
> +{
> +       struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +
> +#ifdef RTE_DMADEV_DEBUG
> +       if (!rte_dmadev_is_valid_dev(dev_id) ||
> +           vchan >= dev->data->dev_conf.max_vchans)
> +               return -EINVAL;
> +       RTE_FUNC_PTR_OR_ERR_RET(*dev->copy, -ENOTSUP);
> +#endif
> +
> +       return (*dev->copy)(dev, vchan, src, dst, length, flags);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue a scatter list copy operation onto the virtual DMA channel.
> + *
> + * This queues up a scatter list copy operation to be performed by hardware, if
> + * the 'flags' parameter contains RTE_DMA_OP_FLAG_SUBMIT then trigger doorbell
> + * to begin this operation, otherwise do not trigger doorbell.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + * @param src
> + *   The pointer of source scatter entry array.
> + * @param dst
> + *   The pointer of destination scatter entry array.
> + * @param nb_src
> + *   The number of source scatter entry.
> + * @param nb_dst
> + *   The number of destination scatter entry.
> + * @param flags
> + *   An flags for this operation.
> + *   @see RTE_DMA_OP_FLAG_*
> + *
> + * @return
> + *   - 0..UINT16_MAX: index of enqueued copy scatterlist job.
> + *   - <0: Error code returned by the driver copy scatterlist function.
> + */
> +__rte_experimental
> +static inline int
> +rte_dmadev_copy_sg(uint16_t dev_id, uint16_t vchan, struct rte_dmadev_sge *src,
> +                  struct rte_dmadev_sge *dst, uint16_t nb_src, uint16_t nb_dst,
> +                  uint64_t flags)
> +{
> +       struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +
> +#ifdef RTE_DMADEV_DEBUG
> +       if (!rte_dmadev_is_valid_dev(dev_id) ||
> +           vchan >= dev->data->dev_conf.max_vchans ||
> +           src == NULL || dst == NULL || nb_src == 0 || nb_dst == 0)
> +               return -EINVAL;
> +       RTE_FUNC_PTR_OR_ERR_RET(*dev->copy_sg, -ENOTSUP);
> +#endif
> +
> +       return (*dev->copy_sg)(dev, vchan, src, dst, nb_src, nb_dst, flags);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Enqueue a fill operation onto the virtual DMA channel.
> + *
> + * This queues up a fill operation to be performed by hardware, if the 'flags'
> + * parameter contains RTE_DMA_OP_FLAG_SUBMIT then trigger doorbell to begin
> + * this operation, otherwise do not trigger doorbell.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + * @param pattern
> + *   The pattern to populate the destination buffer with.
> + * @param dst
> + *   The address of the destination buffer.
> + * @param length
> + *   The length of the destination buffer.
> + * @param flags
> + *   An flags for this operation.
> + *   @see RTE_DMA_OP_FLAG_*
> + *
> + * @return
> + *   - 0..UINT16_MAX: index of enqueued fill job.
> + *   - <0: Error code returned by the driver fill function.
> + */
> +__rte_experimental
> +static inline int
> +rte_dmadev_fill(uint16_t dev_id, uint16_t vchan, uint64_t pattern,
> +               rte_iova_t dst, uint32_t length, uint64_t flags)
> +{
> +       struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +
> +#ifdef RTE_DMADEV_DEBUG
> +       if (!rte_dmadev_is_valid_dev(dev_id) ||
> +           vchan >= dev->data->dev_conf.max_vchans)
> +               return -EINVAL;
> +       RTE_FUNC_PTR_OR_ERR_RET(*dev->fill, -ENOTSUP);
> +#endif
> +
> +       return (*dev->fill)(dev, vchan, pattern, dst, length, flags);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Trigger hardware to begin performing enqueued operations.
> + *
> + * This API is used to write the "doorbell" to the hardware to trigger it
> + * to begin the operations previously enqueued by rte_dmadev_copy/fill().
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + *
> + * @return
> + *   - =0: Successfully trigger hardware.
> + *   - <0: Failure to trigger hardware.
> + */
> +__rte_experimental
> +static inline int
> +rte_dmadev_submit(uint16_t dev_id, uint16_t vchan)
> +{
> +       struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +
> +#ifdef RTE_DMADEV_DEBUG
> +       if (!rte_dmadev_is_valid_dev(dev_id) ||
> +           vchan >= dev->data->dev_conf.max_vchans)
> +               return -EINVAL;
> +       RTE_FUNC_PTR_OR_ERR_RET(*dev->submit, -ENOTSUP);
> +#endif
> +
> +       return (*dev->submit)(dev, vchan);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Returns the number of operations that have been successfully completed.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + * @param nb_cpls
> + *   The maximum number of completed operations that can be processed.
> + * @param[out] last_idx
> + *   The last completed operation's index.
> + *   If not required, NULL can be passed in.
> + * @param[out] has_error
> + *   Indicates if there are transfer error.
> + *   If not required, NULL can be passed in.
> + *
> + * @return
> + *   The number of operations that successfully completed. This return value
> + *   must be less than or equal to the value of nb_cpls.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_dmadev_completed(uint16_t dev_id, uint16_t vchan, const uint16_t nb_cpls,
> +                    uint16_t *last_idx, bool *has_error)
> +{
> +       struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +       uint16_t idx;
> +       bool err;
> +
> +#ifdef RTE_DMADEV_DEBUG
> +       if (!rte_dmadev_is_valid_dev(dev_id) ||
> +           vchan >= dev->data->dev_conf.max_vchans ||
> +           nb_cpls == 0)
> +               return 0;
> +       RTE_FUNC_PTR_OR_ERR_RET(*dev->completed, 0);
> +#endif
> +
> +       /* Ensure the pointer values are non-null to simplify drivers.
> +        * In most cases these should be compile time evaluated, since this is
> +        * an inline function.
> +        * - If NULL is explicitly passed as parameter, then compiler knows the
> +        *   value is NULL
> +        * - If address of local variable is passed as parameter, then compiler
> +        *   can know it's non-NULL.
> +        */
> +       if (last_idx == NULL)
> +               last_idx = &idx;
> +       if (has_error == NULL)
> +               has_error = &err;
> +
> +       *has_error = false;
> +       return (*dev->completed)(dev, vchan, nb_cpls, last_idx, has_error);
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Returns the number of operations that have been completed, and the
> + * operations result may succeed or fail.
> + *
> + * @param dev_id
> + *   The identifier of the device.
> + * @param vchan
> + *   The identifier of virtual DMA channel.
> + * @param nb_cpls
> + *   Indicates the size of status array.
> + * @param[out] last_idx
> + *   The last completed operation's index.
> + *   If not required, NULL can be passed in.
> + * @param[out] status
> + *   The error code of operations that completed.
> + *   @see enum rte_dma_status_code
> + *
> + * @return
> + *   The number of operations that completed. This return value must be less
> + *   than or equal to the value of nb_cpls.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_dmadev_completed_status(uint16_t dev_id, uint16_t vchan,
> +                           const uint16_t nb_cpls, uint16_t *last_idx,
> +                           enum rte_dma_status_code *status)
> +{
> +       struct rte_dmadev *dev = &rte_dmadevices[dev_id];
> +       uint16_t idx;
> +
> +#ifdef RTE_DMADEV_DEBUG
> +       if (!rte_dmadev_is_valid_dev(dev_id) ||
> +           vchan >= dev->data->dev_conf.max_vchans ||
> +           nb_cpls == 0 || status == NULL)
> +               return 0;
> +       RTE_FUNC_PTR_OR_ERR_RET(*dev->completed_status, 0);
> +#endif
> +
> +       if (last_idx == NULL)
> +               last_idx = &idx;
> +
> +       return (*dev->completed_status)(dev, vchan, nb_cpls, last_idx, status);
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +


* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
  @ 2021-07-19  6:58  0% ` Xueming(Steven) Li
  2021-07-19  8:46  0%   ` Andrew Rybchenko
  2021-07-29  4:20  0% ` Xueming(Steven) Li
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 200+ results
From: Xueming(Steven) Li @ 2021-07-19  6:58 UTC (permalink / raw)
  To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable



> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Tuesday, July 13, 2021 12:18 AM
> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur <somnath.kotur@broadcom.com>; John Daley
> <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>; Qiming Yang
> <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad
> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas
> Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Xueming(Steven) Li <xuemingl@nvidia.com>
> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> Subject: [PATCH] ethdev: fix representor port ID search by name
> 
> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> 
> Fix representor port ID search by name if the representor itself does not provide representors info. Getting a list of representors from
> a representor does not make sense. Instead, a parent device should be used.
> 
> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
> 
> Fixes: df7547a6a2cc ("ethdev: add helper function to get representor ID")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> ---
> The new field is added into the hole in rte_eth_dev_data structure.
> The patch does not change ABI, but extra care is required since ABI check is disabled for the structure because of the libabigail bug [1].
> 
> Potentially it is bad for out-of-tree drivers which implement representors but do not fill in a new parert_port_id field in
> rte_eth_dev_data structure. Do we care?
> 
> May be the patch should add lines to release notes, but I'd like to get initial feedback first.
> 
> mlx5 changes should be reviwed by maintainers very carefully, since we are not sure if we patch it correctly.
> 
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> 
>  drivers/net/bnxt/bnxt_reps.c             |  1 +
>  drivers/net/enic/enic_vf_representor.c   |  1 +
>  drivers/net/i40e/i40e_vf_representor.c   |  1 +
>  drivers/net/ice/ice_dcf_vf_representor.c |  1 +  drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
>  drivers/net/mlx5/linux/mlx5_os.c         | 11 +++++++++++
>  drivers/net/mlx5/windows/mlx5_os.c       | 11 +++++++++++
>  lib/ethdev/ethdev_driver.h               |  6 +++---
>  lib/ethdev/rte_class_eth.c               |  2 +-
>  lib/ethdev/rte_ethdev.c                  |  8 ++++----
>  lib/ethdev/rte_ethdev_core.h             |  4 ++++
>  11 files changed, 39 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c index bdbad53b7d..902591cd39 100644
> --- a/drivers/net/bnxt/bnxt_reps.c
> +++ b/drivers/net/bnxt/bnxt_reps.c
> @@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
>  	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	eth_dev->data->representor_id = rep_params->vf_id;
> +	eth_dev->data->parent_port_id = rep_params->parent_dev->data->port_id;
> 
>  	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
>  	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr, diff --git a/drivers/net/enic/enic_vf_representor.c
> b/drivers/net/enic/enic_vf_representor.c
> index 79dd6e5640..6ee7967ce9 100644
> --- a/drivers/net/enic/enic_vf_representor.c
> +++ b/drivers/net/enic/enic_vf_representor.c
> @@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
>  	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	eth_dev->data->representor_id = vf->vf_id;
> +	eth_dev->data->parent_port_id = pf->port_id;
>  	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
>  		sizeof(struct rte_ether_addr) *
>  		ENIC_UNICAST_PERFECT_FILTERS, 0);
> diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
> index 0481b55381..865b637585 100644
> --- a/drivers/net/i40e/i40e_vf_representor.c
> +++ b/drivers/net/i40e/i40e_vf_representor.c
> @@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
>  	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	ethdev->data->representor_id = representor->vf_id;
> +	ethdev->data->parent_port_id = pf->dev_data->parent_port_id;
> 
>  	/* Setting the number queues allocated to the VF */
>  	ethdev->data->nb_rx_queues = vf->vsi->nb_qps; diff --git a/drivers/net/ice/ice_dcf_vf_representor.c
> b/drivers/net/ice/ice_dcf_vf_representor.c
> index 970461f3e9..c7cd3fd290 100644
> --- a/drivers/net/ice/ice_dcf_vf_representor.c
> +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> @@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
> 
>  	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  	vf_rep_eth_dev->data->representor_id = repr->vf_id;
> +	vf_rep_eth_dev->data->parent_port_id =
> +repr->dcf_eth_dev->data->port_id;
> 
>  	vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
> 
> diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
> index d5b636a194..7a2063849e 100644
> --- a/drivers/net/ixgbe/ixgbe_vf_representor.c
> +++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
> @@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
> 
>  	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  	ethdev->data->representor_id = representor->vf_id;
> +	ethdev->data->parent_port_id = representor->pf_ethdev->data->port_id;
> 
>  	/* Set representor device ops */
>  	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops; diff --git a/drivers/net/mlx5/linux/mlx5_os.c
> b/drivers/net/mlx5/linux/mlx5_os.c
> index be22d9cbd2..5550d30628 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -1511,6 +1511,17 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>  	if (priv->representor) {
>  		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  		eth_dev->data->representor_id = priv->representor_id;
> +		MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
> +			const struct mlx5_priv *opriv =
> +				rte_eth_devices[port_id].data->dev_private;
> +
> +			if (!opriv ||
> +			    opriv->sh != priv->sh ||
> +			    opriv->representor)
> +				continue;
> +			eth_dev->data->parent_port_id = port_id;
> +			break;
> +		}
>  	}
>  	priv->mp_id.port_id = eth_dev->data->port_id;
>  	strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
> diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
> index e30b682822..037c928dc1 100644
> --- a/drivers/net/mlx5/windows/mlx5_os.c
> +++ b/drivers/net/mlx5/windows/mlx5_os.c
> @@ -506,6 +506,17 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>  	if (priv->representor) {
>  		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  		eth_dev->data->representor_id = priv->representor_id;
> +		MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
> +			const struct mlx5_priv *opriv =
> +				rte_eth_devices[port_id].data->dev_private;
> +
> +			if (!opriv ||
> +			    opriv->sh != priv->sh ||
> +			    opriv->representor)
> +				continue;
> +			eth_dev->data->parent_port_id = port_id;
> +			break;
> +		}
>  	}
>  	/*
>  	 * Store associated network device interface index. This index
> diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
> index 40e474aa7e..07f6d1f9a4 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
>   * For backward compatibility, if no representor info, direct
>   * map legacy VF (no controller and pf).
>   *
> - * @param ethdev
> - *  Handle of ethdev port.
> + * @param parent_port_id
> + *  Port ID of the backing device.
>   * @param type
>   *  Representor type.
>   * @param controller
> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
>   */
>  __rte_internal
>  int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t parent_port_id,

It makes more sense to get representor info from the parent port. A representor is a member of a switch domain, and the PMD owns
the information about the representor owner port and the info of its representors. This change looks better, but I am not sure
whether it is valuable to introduce a new member to the EAL data structure.

>  			   enum rte_eth_representor_type type,
>  			   int controller, int pf, int representor_port,
>  			   uint16_t *repr_id);
> diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
> index 1fe5fa1f36..e3b7ab9728 100644
> --- a/lib/ethdev/rte_class_eth.c
> +++ b/lib/ethdev/rte_class_eth.c
> @@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
>  		c = i / (np * nf);
>  		p = (i / nf) % np;
>  		f = i % nf;
> -		if (rte_eth_representor_id_get(edev,
> +		if (rte_eth_representor_id_get(edev->data->parent_port_id,
>  			eth_da.type,
>  			eth_da.nb_mh_controllers == 0 ? -1 :
>  					eth_da.mh_controllers[c],
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 6ebf52b641..acda1d43fb 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -5997,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
>  }
> 
>  int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t parent_port_id,
>  			   enum rte_eth_representor_type type,
>  			   int controller, int pf, int representor_port,
>  			   uint16_t *repr_id)
> @@ -6012,7 +6012,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>  		return -EINVAL;
> 
>  	/* Get PMD representor range info. */
> -	ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
> +	ret = rte_eth_representor_info_get(parent_port_id, NULL);
>  	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
>  	    controller == -1 && pf == -1) {
>  		/* Direct mapping for legacy VF representor. */
> @@ -6026,7 +6026,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>  	info = calloc(1, size);
>  	if (info == NULL)
>  		return -ENOMEM;
> -	ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
> +	ret = rte_eth_representor_info_get(parent_port_id, info);
>  	if (ret < 0)
>  		goto out;
> 
> @@ -6045,7 +6045,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>  			continue;
>  		if (info->ranges[i].id_end < info->ranges[i].id_base) {
>  			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
> -				ethdev->data->port_id, info->ranges[i].id_base,
> +				parent_port_id, info->ranges[i].id_base,
>  				info->ranges[i].id_end, i);
>  			continue;
> 
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
> index edf96de2dc..13cb84b52f 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -185,6 +185,10 @@ struct rte_eth_dev_data {
>  			/**< Switch-specific identifier.
>  			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
>  			 */
> +	uint16_t parent_port_id;
> +			/**< Port ID of the backing device.
> +			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> +			 */
> 
>  	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
>  	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> --
> 2.30.2


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
  2021-07-19  6:58  0% ` Xueming(Steven) Li
@ 2021-07-19  8:46  0%   ` Andrew Rybchenko
  2021-07-19 11:54  0%     ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-07-19  8:46 UTC (permalink / raw)
  To: Xueming(Steven) Li, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable

On 7/19/21 9:58 AM, Xueming(Steven) Li wrote:
> 
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Tuesday, July 13, 2021 12:18 AM
>> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur <somnath.kotur@broadcom.com>; John Daley
>> <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>; Qiming Yang
>> <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad
>> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas
>> Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Xueming(Steven) Li <xuemingl@nvidia.com>
>> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
>> Subject: [PATCH] ethdev: fix representor port ID search by name
>>
>> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>>
>> Fix representor port ID search by name if the representor itself does not provide representors info. Getting a list of representors from
>> a representor does not make sense. Instead, a parent device should be used.
>>
>> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
>>
>> Fixes: df7547a6a2cc ("ethdev: add helper function to get representor ID")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> ---
>> The new field is added into the hole in rte_eth_dev_data structure.
>> The patch does not change ABI, but extra care is required since ABI check is disabled for the structure because of the libabigail bug [1].
>>
>> Potentially it is bad for out-of-tree drivers which implement representors but do not fill in a new parert_port_id field in
>> rte_eth_dev_data structure. Do we care?
>>
>> May be the patch should add lines to release notes, but I'd like to get initial feedback first.
>>
>> mlx5 changes should be reviwed by maintainers very carefully, since we are not sure if we patch it correctly.
>>
>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060

[snip]

>> --- a/lib/ethdev/ethdev_driver.h
>> +++ b/lib/ethdev/ethdev_driver.h
>> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
>>    * For backward compatibility, if no representor info, direct
>>    * map legacy VF (no controller and pf).
>>    *
>> - * @param ethdev
>> - *  Handle of ethdev port.
>> + * @param parent_port_id
>> + *  Port ID of the backing device.
>>    * @param type
>>    *  Representor type.
>>    * @param controller
>> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
>>    */
>>   __rte_internal
>>   int
>> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>> +rte_eth_representor_id_get(uint16_t parent_port_id,
> 
> It make more sense to get representor info from parent port. Representor is a member of switch domain, PMD owns
> the information of  the representor owner port and info of representors. This change looks better, but not sure
> whether it valuable to introduce a new member to the EAL data structure.

IMHO, it is simply incorrect to return representor info from a
representor itself. Representor info describes which representors
may be populated on top of the given device.

If the above statement is correct, we need a way to get the parent
device from a representor in order to map a name to a representor ID.
I see two options to do it:
  A. Dedicated field in rte_eth_dev_data as the patch does.
  B. Dedicated ethdev op (since representor knows parent port ID anyway).
We have chosen (A) because of simplicity.
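
To make (B) a bit more concrete, here is a rough sketch of what such an op
could look like -- the op and helper names below are purely illustrative,
not an actual proposal:

/* Hypothetical: option (B) would add a driver callback so the ethdev layer
 * can ask a representor for its parent port on demand. */
typedef int (*eth_parent_port_id_get_t)(struct rte_eth_dev *dev,
					uint16_t *parent_port_id);

/* eth_representor_cmp() in rte_class_eth.c would then resolve the parent
 * first and query the representor info from it: */
static int
repr_parent_port_get(struct rte_eth_dev *edev, uint16_t *parent_port_id)
{
	if (edev->dev_ops->parent_port_id_get == NULL)
		return -ENOTSUP;
	return edev->dev_ops->parent_port_id_get(edev, parent_port_id);
}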

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
  2021-07-19  8:46  0%   ` Andrew Rybchenko
@ 2021-07-19 11:54  0%     ` Xueming(Steven) Li
  2021-07-19 12:36  0%       ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Xueming(Steven) Li @ 2021-07-19 11:54 UTC (permalink / raw)
  To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable



> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, July 19, 2021 4:46 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang
> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
> 
> On 7/19/21 9:58 AM, Xueming(Steven) Li wrote:
> >
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Sent: Tuesday, July 13, 2021 12:18 AM
> >> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
> >> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong
> >> Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>;
> >> Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>;
> >> Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>;
> >> Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> >> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
> >> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>;
> >> Xueming(Steven) Li <xuemingl@nvidia.com>
> >> Cc: dev@dpdk.org; Viacheslav Galaktionov
> >> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> >> Subject: [PATCH] ethdev: fix representor port ID search by name
> >>
> >> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> >>
> >> Fix representor port ID search by name if the representor itself does
> >> not provide representors info. Getting a list of representors from a representor does not make sense. Instead, a parent device
> should be used.
> >>
> >> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
> >>
> >> Fixes: df7547a6a2cc ("ethdev: add helper function to get representor
> >> ID")
> >> Cc: stable@dpdk.org
> >>
> >> Signed-off-by: Viacheslav Galaktionov
> >> <viacheslav.galaktionov@oktetlabs.ru>
> >> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> ---
> >> The new field is added into the hole in rte_eth_dev_data structure.
> >> The patch does not change ABI, but extra care is required since ABI check is disabled for the structure because of the libabigail bug
> [1].
> >>
> >> Potentially it is bad for out-of-tree drivers which implement
> >> representors but do not fill in a new parert_port_id field in rte_eth_dev_data structure. Do we care?
> >>
> >> May be the patch should add lines to release notes, but I'd like to get initial feedback first.
> >>
> >> mlx5 changes should be reviwed by maintainers very carefully, since we are not sure if we patch it correctly.
> >>
> >> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> 
> [snip]
> 
> >> --- a/lib/ethdev/ethdev_driver.h
> >> +++ b/lib/ethdev/ethdev_driver.h
> >> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
> >>    * For backward compatibility, if no representor info, direct
> >>    * map legacy VF (no controller and pf).
> >>    *
> >> - * @param ethdev
> >> - *  Handle of ethdev port.
> >> + * @param parent_port_id
> >> + *  Port ID of the backing device.
> >>    * @param type
> >>    *  Representor type.
> >>    * @param controller
> >> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
> >>    */
> >>   __rte_internal
> >>   int
> >> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> >> +rte_eth_representor_id_get(uint16_t parent_port_id,
> >
> > It make more sense to get representor info from parent port.
> > Representor is a member of switch domain, PMD owns the information of
> > the representor owner port and info of representors. This change looks better, but not sure whether it valuable to introduce a new
> member to the EAL data structure.
> 
> IMHO, it is simply incorrect to return representors info on a representor itself. Representor info is an information which representors
> may be populated using the device.
> 
> If above statement is correct, we need a way to get parent device by representor to do name to representor ID mapping. I see two
> options to do it:
>   A. Dedicated field in rte_eth_dev_data as the patch does.
>   B. Dedicated ethdev op (since representor knows parent port ID anyway).
> We have chosen (A) because of simplicity.

Just recalled that a representor port could be probed w/o its owner PF; is a parent port enforced in that case?

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
  2021-07-19 11:54  0%     ` Xueming(Steven) Li
@ 2021-07-19 12:36  0%       ` Andrew Rybchenko
  2021-07-19 12:50  0%         ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-07-19 12:36 UTC (permalink / raw)
  To: Xueming(Steven) Li, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable

On 7/19/21 2:54 PM, Xueming(Steven) Li wrote:
> 
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Monday, July 19, 2021 4:46 PM
>> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang
>> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
>> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
>> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
>> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
>>
>> On 7/19/21 9:58 AM, Xueming(Steven) Li wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>> Sent: Tuesday, July 13, 2021 12:18 AM
>>>> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
>>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong
>>>> Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>;
>>>> Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>;
>>>> Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>;
>>>> Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
>>>> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
>>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>;
>>>> Xueming(Steven) Li <xuemingl@nvidia.com>
>>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
>>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
>>>> Subject: [PATCH] ethdev: fix representor port ID search by name
>>>>
>>>> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>>>>
>>>> Fix representor port ID search by name if the representor itself does
>>>> not provide representors info. Getting a list of representors from a representor does not make sense. Instead, a parent device
>> should be used.
>>>>
>>>> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
>>>>
>>>> Fixes: df7547a6a2cc ("ethdev: add helper function to get representor
>>>> ID")
>>>> Cc: stable@dpdk.org
>>>>
>>>> Signed-off-by: Viacheslav Galaktionov
>>>> <viacheslav.galaktionov@oktetlabs.ru>
>>>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>> ---
>>>> The new field is added into the hole in rte_eth_dev_data structure.
>>>> The patch does not change ABI, but extra care is required since ABI check is disabled for the structure because of the libabigail bug
>> [1].
>>>>
>>>> Potentially it is bad for out-of-tree drivers which implement
>>>> representors but do not fill in a new parert_port_id field in rte_eth_dev_data structure. Do we care?
>>>>
>>>> May be the patch should add lines to release notes, but I'd like to get initial feedback first.
>>>>
>>>> mlx5 changes should be reviwed by maintainers very carefully, since we are not sure if we patch it correctly.
>>>>
>>>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>>
>> [snip]
>>
>>>> --- a/lib/ethdev/ethdev_driver.h
>>>> +++ b/lib/ethdev/ethdev_driver.h
>>>> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
>>>>     * For backward compatibility, if no representor info, direct
>>>>     * map legacy VF (no controller and pf).
>>>>     *
>>>> - * @param ethdev
>>>> - *  Handle of ethdev port.
>>>> + * @param parent_port_id
>>>> + *  Port ID of the backing device.
>>>>     * @param type
>>>>     *  Representor type.
>>>>     * @param controller
>>>> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
>>>>     */
>>>>    __rte_internal
>>>>    int
>>>> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>>>> +rte_eth_representor_id_get(uint16_t parent_port_id,
>>>
>>> It make more sense to get representor info from parent port.
>>> Representor is a member of switch domain, PMD owns the information of
>>> the representor owner port and info of representors. This change looks better, but not sure whether it valuable to introduce a new
>> member to the EAL data structure.
>>
>> IMHO, it is simply incorrect to return representors info on a representor itself. Representor info is an information which representors
>> may be populated using the device.
>>
>> If above statement is correct, we need a way to get parent device by representor to do name to representor ID mapping. I see two
>> options to do it:
>>    A. Dedicated field in rte_eth_dev_data as the patch does.
>>    B. Dedicated ethdev op (since representor knows parent port ID anyway).
>> We have chosen (A) because of simplicity.
> 
> Just recalled that representor port could be probed w/o owner PF, is a force for parent port?

I thought that was impossible and that a parent port is absolutely required
for a representor. Could you provide an example and explain how it would
work?


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
  2021-07-19 12:36  0%       ` Andrew Rybchenko
@ 2021-07-19 12:50  0%         ` Xueming(Steven) Li
  2021-07-20  8:59  0%           ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Xueming(Steven) Li @ 2021-07-19 12:50 UTC (permalink / raw)
  To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable



> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Monday, July 19, 2021 8:36 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang
> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
> 
> On 7/19/21 2:54 PM, Xueming(Steven) Li wrote:
> >
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Sent: Monday, July 19, 2021 4:46 PM
> >> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde
> >> <ajit.khaparde@broadcom.com>; Somnath Kotur
> >> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong
> >> Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>;
> >> Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>;
> >> Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>;
> >> Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> >> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
> >> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
> >> Cc: dev@dpdk.org; Viacheslav Galaktionov
> >> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> >> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
> >>
> >> On 7/19/21 9:58 AM, Xueming(Steven) Li wrote:
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>> Sent: Tuesday, July 13, 2021 12:18 AM
> >>>> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
> >>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>;
> >>>> Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
> >>>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi
> >>>> Zhang <qi.z.zhang@intel.com>; Haiyue Wang <haiyue.wang@intel.com>;
> >>>> Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>;
> >>>> Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas
> >>>> Monjalon <thomas@monjalon.net>; Ferruh Yigit
> >>>> <ferruh.yigit@intel.com>;
> >>>> Xueming(Steven) Li <xuemingl@nvidia.com>
> >>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
> >>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> >>>> Subject: [PATCH] ethdev: fix representor port ID search by name
> >>>>
> >>>> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> >>>>
> >>>> Fix representor port ID search by name if the representor itself
> >>>> does not provide representors info. Getting a list of representors
> >>>> from a representor does not make sense. Instead, a parent device
> >> should be used.
> >>>>
> >>>> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
> >>>>
> >>>> Fixes: df7547a6a2cc ("ethdev: add helper function to get
> >>>> representor
> >>>> ID")
> >>>> Cc: stable@dpdk.org
> >>>>
> >>>> Signed-off-by: Viacheslav Galaktionov
> >>>> <viacheslav.galaktionov@oktetlabs.ru>
> >>>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>> ---
> >>>> The new field is added into the hole in rte_eth_dev_data structure.
> >>>> The patch does not change ABI, but extra care is required since ABI
> >>>> check is disabled for the structure because of the libabigail bug
> >> [1].
> >>>>
> >>>> Potentially it is bad for out-of-tree drivers which implement
> >>>> representors but do not fill in a new parert_port_id field in rte_eth_dev_data structure. Do we care?
> >>>>
> >>>> May be the patch should add lines to release notes, but I'd like to get initial feedback first.
> >>>>
> >>>> mlx5 changes should be reviwed by maintainers very carefully, since we are not sure if we patch it correctly.
> >>>>
> >>>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> >>
> >> [snip]
> >>
> >>>> --- a/lib/ethdev/ethdev_driver.h
> >>>> +++ b/lib/ethdev/ethdev_driver.h
> >>>> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
> >>>>     * For backward compatibility, if no representor info, direct
> >>>>     * map legacy VF (no controller and pf).
> >>>>     *
> >>>> - * @param ethdev
> >>>> - *  Handle of ethdev port.
> >>>> + * @param parent_port_id
> >>>> + *  Port ID of the backing device.
> >>>>     * @param type
> >>>>     *  Representor type.
> >>>>     * @param controller
> >>>> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
> >>>>     */
> >>>>    __rte_internal
> >>>>    int
> >>>> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> >>>> +rte_eth_representor_id_get(uint16_t parent_port_id,
> >>>
> >>> It make more sense to get representor info from parent port.
> >>> Representor is a member of switch domain, PMD owns the information
> >>> of the representor owner port and info of representors. This change
> >>> looks better, but not sure whether it valuable to introduce a new
> >> member to the EAL data structure.
> >>
> >> IMHO, it is simply incorrect to return representors info on a
> >> representor itself. Representor info is an information which representors may be populated using the device.
> >>
> >> If above statement is correct, we need a way to get parent device by
> >> representor to do name to representor ID mapping. I see two options to do it:
> >>    A. Dedicated field in rte_eth_dev_data as the patch does.
> >>    B. Dedicated ethdev op (since representor knows parent port ID anyway).
> >> We have chosen (A) because of simplicity.
> >
> > Just recalled that representor port could be probed w/o owner PF, is a force for parent port?
> 
> I thought that it is impossible and parent port is absolutely required for a representor. Could you provide an example and explain how
> will it work?

In the case of bonding, PF0 and PF1 become one PF port `bond0`, whose PCI address is that of PF0.
	-a <PF0>,representor=pf[0-1]vf[0-99] // this is the syntax we proposed.

To be backward compatible, the following 2 devargs are also supported:
	-a <pf0>,representor=[0-99] // probe bond0 and representors on pf0
	-a <pf1>,representor=[0-99] // probe representors on pf1.
If the devargs start with PF1, no owner PF1 port is created, since PF1 is disabled by bonding. bond0 (PF0) cannot be created
automatically here, because the device is located by the PCI address (PF1) taken from the devargs.
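
For illustration only, such a devargs string can also be used when probing the
device programmatically; the PCI address below is just a placeholder, not one
taken from this discussion:

#include <rte_dev.h>

/* Sketch: probe the bonded PF together with host-PF/VF representors using
 * the proposed representor syntax. */
static int
probe_bonded_representors(void)
{
	return rte_dev_probe("0000:08:00.0,representor=pf[0-1]vf[0-99]");
}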


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] eventdev: configure the Rx event buffer size
  @ 2021-07-19 15:26  3%   ` Kundapura, Ganapati
  2021-07-19 16:13  3%     ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Kundapura, Ganapati @ 2021-07-19 15:26 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Jayatheerthan, Jay, dpdk-dev

Hi Jerin,
   Please find my response inline.

-----Original Message-----
From: Jerin Jacob <jerinjacobk@gmail.com> 
Sent: 19 July 2021 12:14
To: Kundapura, Ganapati <ganapati.kundapura@intel.com>
Cc: Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; dpdk-dev <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH] eventdev: configure the Rx event buffer size

On Fri, Jul 16, 2021 at 10:33 PM Ganapati Kundapura <ganapati.kundapura@intel.com> wrote:
>
> As of now Rx event buffer size is static and set to 128.
>
> This patch sets the Rx event buffer size to 192, configurable at 
> compile time and also errors out at run time if Rx event buffer size 
> is configured more than 16 bits.
>
> Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
> ---
>  config/rte_config.h                     |  1 +
>  lib/eventdev/rte_event_eth_rx_adapter.c | 14 +++++++++++++-
>  2 files changed, 14 insertions(+), 1 deletion(-)
>
> diff --git a/config/rte_config.h b/config/rte_config.h index 
> 590903c..3d938c8 100644
> --- a/config/rte_config.h
> +++ b/config/rte_config.h
> @@ -77,6 +77,7 @@
>  #define RTE_EVENT_ETH_INTR_RING_SIZE 1024  #define 
> RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32  #define 
> RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
> +#define RTE_EVENT_ETH_RX_ADAPTER_BUFFER_SIZE 128

We are limiting any configuration to rte_config.h file.
Could you make it dynamic with the default value and application can pass the value kind of scheme?
[Ganapati]
Making the Rx event buffer size dynamic seems to be a good idea, but in the case of the Rx adapter,
passing the event buffer size to the adapter create API requires an API signature change, which breaks ABI.
Adding the event buffer size to the port_config parameter that comes from eventdev to the adapter
create function is not scalable either, because the user can also call create_ext() with its own callback,
and the parameter to that callback is a void * interpreted by the user-space callback function.

I think one way to make the event buffer size dynamic is to add a new API to set the event buffer size.
If it is called, the event buffer size is set to the value passed; otherwise the Rx adapter instance create API
uses the default value.
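
Roughly along these lines -- the function name and signature below are purely
illustrative of the idea, not a settled proposal:

/* Hypothetical API sketch: set the Rx event buffer size to be used by an
 * adapter instance; if it is never called, the adapter keeps the built-in
 * default. */
int rte_event_eth_rx_adapter_event_buf_size_set(uint8_t id, uint32_t buf_size);

/* Possible usage (id, dev_id and port_config as usually set up by the app): */
ret = rte_event_eth_rx_adapter_event_buf_size_set(id, 192);
if (ret == 0)
	ret = rte_event_eth_rx_adapter_create(id, dev_id, &port_config);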

Let me know your opinion on this.


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] eventdev: configure the Rx event buffer size
  2021-07-19 15:26  3%   ` Kundapura, Ganapati
@ 2021-07-19 16:13  3%     ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2021-07-19 16:13 UTC (permalink / raw)
  To: Kundapura, Ganapati; +Cc: Jayatheerthan, Jay, dpdk-dev

On Mon, Jul 19, 2021 at 8:57 PM Kundapura, Ganapati
<ganapati.kundapura@intel.com> wrote:
>
> Hi Jerin,

Hi Ganapati

>    Please find my response in lined.
>
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: 19 July 2021 12:14
> To: Kundapura, Ganapati <ganapati.kundapura@intel.com>
> Cc: Jayatheerthan, Jay <jay.jayatheerthan@intel.com>; dpdk-dev <dev@dpdk.org>
> Subject: Re: [dpdk-dev] [PATCH] eventdev: configure the Rx event buffer size
>
> On Fri, Jul 16, 2021 at 10:33 PM Ganapati Kundapura <ganapati.kundapura@intel.com> wrote:
> >
> > As of now Rx event buffer size is static and set to 128.
> >
> > This patch sets the Rx event buffer size to 192, configurable at
> > compile time and also errors out at run time if Rx event buffer size
> > is configured more than 16 bits.
> >
> > Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
> > ---
> >  config/rte_config.h                     |  1 +
> >  lib/eventdev/rte_event_eth_rx_adapter.c | 14 +++++++++++++-
> >  2 files changed, 14 insertions(+), 1 deletion(-)
> >
> > diff --git a/config/rte_config.h b/config/rte_config.h index
> > 590903c..3d938c8 100644
> > --- a/config/rte_config.h
> > +++ b/config/rte_config.h
> > @@ -77,6 +77,7 @@
> >  #define RTE_EVENT_ETH_INTR_RING_SIZE 1024  #define
> > RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32  #define
> > RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32
> > +#define RTE_EVENT_ETH_RX_ADAPTER_BUFFER_SIZE 128
>
> We are limiting any configuration to rte_config.h file.
> Could you make it dynamic with the default value and application can pass the value kind of scheme?
> [Ganapati]
> Making the Rx event buffer size dynamic seems to be a good idea but in case of rx adapter,
> either passing event buffer size to adapter create api requires api signature change which breaks ABI
> or by adding event buffer size in port_config parameter which comes from eventdev
> to adapter create function is not scalable as user can also call create_ext() with its own callback
> and parameter to callback is void * which is interpreted by user space callback function.
>
> I think one way to do the event buffer size dynamic is to add new api to set the event buffer size.
> If called, it'll set the event buffer size to the value passed otherwise rx adapter instance create api will do with
> default value.
>
> Let me know your opinion on this.

We can break the ABI in v21.11, so the create API config structure can change.
Please send a deprecation notice and submit the implementation for 21.11.

>

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [dpdk-stable] [PATCH v4] app/testpmd: fix testpmd doesn't show RSS hash offload
       [not found]             ` <DM8PR11MB5639C757A790F65CBFB647C2D1E19@DM8PR11MB5639.namprd11.prod.outlook.com>
@ 2021-07-19 16:18  0%           ` Ferruh Yigit
  2021-07-22 11:03  0%             ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-07-19 16:18 UTC (permalink / raw)
  To: Wang, Jie1X, Li, Xiaoyun, dev; +Cc: andrew.rybchenko, stable

On 7/19/2021 10:55 AM, Wang, Jie1X wrote:
> 
> 
>> -----Original Message-----
>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>> Sent: Friday, July 16, 2021 4:52 PM
>> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Wang, Jie1X <jie1x.wang@intel.com>;
>> dev@dpdk.org
>> Cc: andrew.rybchenko@oktetlabs.ru; stable@dpdk.org
>> Subject: Re: [dpdk-stable] [PATCH v4] app/testpmd: fix testpmd doesn't show
>> RSS hash offload
>>
>> On 7/16/2021 9:30 AM, Li, Xiaoyun wrote:
>>>> -----Original Message-----
>>>> From: stable <stable-bounces@dpdk.org> On Behalf Of Li, Xiaoyun
>>>> Sent: Thursday, July 15, 2021 12:54
>>>> To: Wang, Jie1X <jie1x.wang@intel.com>; dev@dpdk.org
>>>> Cc: andrew.rybchenko@oktetlabs.ru; stable@dpdk.org
>>>> Subject: Re: [dpdk-stable] [PATCH v4] app/testpmd: fix testpmd
>>>> doesn't show RSS hash offload
>>>>
>>>>> -----Original Message-----
>>>>> From: Wang, Jie1X <jie1x.wang@intel.com>
>>>>> Sent: Thursday, July 15, 2021 19:57
>>>>> To: dev@dpdk.org
>>>>> Cc: Li, Xiaoyun <xiaoyun.li@intel.com>;
>>>>> andrew.rybchenko@oktetlabs.ru; Wang, Jie1X <jie1x.wang@intel.com>;
>>>>> stable@dpdk.org
>>>>> Subject: [PATCH v4] app/testpmd: fix testpmd doesn't show RSS hash
>>>>> offload
>>>>>
>>>>> The driver may change offloads info into dev->data->dev_conf in
>>>>> dev_configure which may cause port->dev_conf and port->rx_conf
>>>>> contain
>>>> outdated values.
>>>>>
>>>>> This patch updates the offloads info if it changes to fix this issue.
>>>>>
>>>>> Fixes: ce8d561418d4 ("app/testpmd: add port configuration settings")
>>>>> Cc: stable@dpdk.org
>>>>>
>>>>> Signed-off-by: Jie Wang <jie1x.wang@intel.com>
>>>>> ---
>>>>> v4: delete the whitespace at the end of the line.
>>>>> v3:
>>>>>  - check and update the "offloads" of "port->dev_conf.rx/txmode".
>>>>>  - update the commit log.
>>>>> v2: copy "rx/txmode.offloads", instead of copying the entire struct
>>>>> "dev->data-
>>>>>> dev_conf.rx/txmode".
>>>>> ---
>>>>>  app/test-pmd/testpmd.c | 27 +++++++++++++++++++++++++++
>>>>>  1 file changed, 27 insertions(+)
>>>>
>>>> Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
>>>
>>> Although I gave my ack, app shouldn't touch rte_eth_devices which this patch
>> does. Usually, testpmd should only call function like
>> eth_dev_info_get_print_err().
>>> But dev_info doesn't contain the info dev->data->dev_conf which the driver
>> modifies.
>>>
>>> Probably we need a better fix.
>>>
>>
>> Agree, an application accessing directly to 'rte_eth_devices' is sign of something
>> missing/wrong.
>>
>> In this case there is no way for application to know what is the configured
>> offload settings per port and queue. Which is missing part I think.
>>
>> As you said normally we get data from PMD mainly via 'rte_eth_dev_info_get()',
>> which is an overloaded function, it provides many different things, like driver
>> default values, limitations, current config/status, capabilities etc...
>>
>> So I think we can do a few things:
>> 1) Add current offload configuration to 'rte_eth_dev_info_get()', so application
>> can get it and use it.
>> The advantage is this API already called many places, many times, so there is a
>> big chance that application already have this information when it needs.
>> Disadvantage is, as mentioned above the API already big and messy, making it
>> bigger makes more error prone and makes easier to break ABI.
>>
> I prefer to choose the 1st suggestion. 
> 
> Normally PMD gets data via 'rte_eth_dev_info_get()'. When we add offloads configuration 
> to it, we can get offloads as same as getting other info.
> 

Option 1) is most probably easier to implement, and I see your point, but as said
before I think 'rte_eth_dev_info_get()' is already messy and I am worried about
making it even bigger.

I prefer option 2).

@Thomas, @Andrew, what do you think?


>> 2) Add a new API to get configured offload information, so a specific API for it.
>>
>> 3) Get a more generic API to get configured config (dev_conf) which will cover
>> offloads too.
>> Disadvantage can be leaking out too many internal config to user unintentionally.
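
As a rough sketch of what option 2) above could look like -- the function name
and exact signature here are illustrative only, not a concrete proposal:

/* Hypothetical API: report the offloads actually configured on the device
 * after rte_eth_dev_configure(), so an application such as testpmd does not
 * need to read rte_eth_devices[] directly. */
__rte_experimental
int rte_eth_dev_configured_offloads_get(uint16_t port_id,
					uint64_t *rx_offloads,
					uint64_t *tx_offloads);

/* testpmd could then refresh its local copy after configuring the port:
 *	rte_eth_dev_configured_offloads_get(pid, &rx_offl, &tx_offl);
 *	port->dev_conf.rxmode.offloads = rx_offl;
 *	port->dev_conf.txmode.offloads = tx_offl;
 */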


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] eal: fix argument to rte_bsf32_safe
  @ 2021-07-19 17:15  0% ` Tyler Retzlaff
  2021-07-19 22:00  3%   ` Stephen Hemminger
  2021-07-23  0:52  8% ` [dpdk-dev] [PATCH v2] " Stephen Hemminger
  2021-07-23 15:45  8% ` [dpdk-dev] [PATCH v3] " Stephen Hemminger
  2 siblings, 1 reply; 200+ results
From: Tyler Retzlaff @ 2021-07-19 17:15 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, anatoly.burakov

On Tue, Jul 13, 2021 at 01:12:21PM -0700, Stephen Hemminger wrote:
> The first argument to rte_bsf32_safe was incorrectly declared as
> a 64 bit value. This function only correctly handles on 32 bit values
> and the underlying function rte_bsf32 only accepts 32 bit values.
> This was introduced when the safe version was added and probably cause
> by copy/paste from the 64 bit version.

there are multiple errors in this family of functions [1] both in usage
and signatures. we previously discussed rolling all fixes up into a single
patch and announcing an api break.

a doc patch was submitted as per the process documented for breaking api
but received no replies [2]

i have a full patch that corrects the whole family if you would like to
take it instead. contact me offline if you are interested.

1. http://mails.dpdk.org/archives/dev/2021-March/201590.html
2. http://mails.dpdk.org/archives/dev/2021-March/201868.html

the change stand-alone is correct so

Acked-By: Tyler Retzlaff <roretzla@linux.microsoft.com>

> 
> The bug passed silently under the radar until some other code was
> built with -Wall and -Wextra in C++ and C++ complains about the
> missing cast.
> 
> Yes, this is a API signature change, but the original code was wrong.
> It is an inline so not an ABI change.
> 
> Fixes: 4e261f551986 ("eal: add 64-bit bsf and 32-bit safe bsf functions")
> Cc: anatoly.burakov@intel.com
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> ---
>  lib/eal/include/rte_common.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
> index d5a32c66a5fe..99eb5f1820ae 100644
> --- a/lib/eal/include/rte_common.h
> +++ b/lib/eal/include/rte_common.h
> @@ -623,7 +623,7 @@ rte_bsf32(uint32_t v)
>   *     Returns 0 if ``v`` was 0, otherwise returns 1.
>   */
>  static inline int
> -rte_bsf32_safe(uint64_t v, uint32_t *pos)
> +rte_bsf32_safe(uint32_t v, uint32_t *pos)
>  {
>  	if (v == 0)
>  		return 0;
> -- 
> 2.30.2
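
For illustration only (not part of the patch), this is the kind of silent
truncation the old prototype allowed:

	uint64_t v = UINT64_C(1) << 40;	/* only a high bit set */
	uint32_t pos;

	/* With the old uint64_t prototype this compiled without complaint:
	 * the `v == 0` check passes, but v is truncated to 32 bits before
	 * rte_bsf32() is called, so the set bit is never found. The fixed
	 * uint32_t prototype makes the narrowing visible at the call site. */
	rte_bsf32_safe(v, &pos);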

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] eal: fix argument to rte_bsf32_safe
  2021-07-19 17:15  0% ` Tyler Retzlaff
@ 2021-07-19 22:00  3%   ` Stephen Hemminger
  2021-07-20 13:26  0%     ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-07-19 22:00 UTC (permalink / raw)
  To: Tyler Retzlaff; +Cc: dev, anatoly.burakov

On Mon, 19 Jul 2021 10:15:34 -0700
Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:

> On Tue, Jul 13, 2021 at 01:12:21PM -0700, Stephen Hemminger wrote:
> > The first argument to rte_bsf32_safe was incorrectly declared as
> > a 64 bit value. This function only correctly handles on 32 bit values
> > and the underlying function rte_bsf32 only accepts 32 bit values.
> > This was introduced when the safe version was added and probably cause
> > by copy/paste from the 64 bit version.  
> 
> there are multiple errors in this family of functions [1] both in usage
> and signatures. we previously discussed rolling all fixes up into a single
> patch and announcing an api break.
> 
> a doc patch was submitted as per the process documented for breaking api
> but received no replies [2]
> 
> i have a full patch that corrects the whole family if you would like to
> take it instead. contact me offline if you are interested.
> 
> 1. http://mails.dpdk.org/archives/dev/2021-March/201590.html
> 2. http://mails.dpdk.org/archives/dev/2021-March/201868.html
> 
> the change stand-alone is correct so
> 
> Acked-By: Tyler Retzlaff <roretzla@linux.microsoft.com>

Thanks, I think the larger set should go into 21.11 where API/ABI break
would be ok. My bit was all about fixing the bug where current code
breaks C++ users.


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH 0/2] Improvements to rte_security
@ 2021-07-20  5:46  3% Anoob Joseph
  0 siblings, 0 replies; 200+ results
From: Anoob Joseph @ 2021-07-20  5:46 UTC (permalink / raw)
  To: Akhil Goyal, Declan Doherty, Fan Zhang, Konstantin Ananyev
  Cc: Anoob Joseph, Jerin Jacob, Ankur Dwivedi, Tejasree Kondoj, dev

Add options for offloading
- IV generation
- SA lifetime

With lookaside protocol (IPsec) offloads, the application is expected to
provide the IV in rte_crypto_op. For cryptodevs which can generate true
random numbers, this operation can be offloaded.

SA lifetime is used in tracking SA expiries and initiating SA renegotiation.
For cryptodevs which can track expiries, this operation can be offloaded.
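
As a rough illustration of the SA lifetime idea -- the struct and field names
below are assumptions made for the example, not necessarily what the patches
define:

/* Illustrative only: the application would express soft/hard expiry limits
 * when creating the IPsec security session, and the PMD/HW would track them
 * instead of the application doing it in software. */
struct sa_lifetime_example {
	uint64_t packets_soft_limit;	/* notify app, start renegotiation */
	uint64_t bytes_soft_limit;
	uint64_t packets_hard_limit;	/* stop using the SA */
	uint64_t bytes_hard_limit;
};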

This patchset introduces ABI breakages and is intended for 21.11 release

Anoob Joseph (2):
  lib/security: add IV generation
  lib/security: add SA lifetime configuration

 examples/ipsec-secgw/ipsec.c |  2 +-
 examples/ipsec-secgw/ipsec.h |  2 +-
 lib/cryptodev/rte_crypto.h   |  7 +++++++
 lib/security/rte_security.h  | 42 ++++++++++++++++++++++++++++++++++++++++--
 4 files changed, 49 insertions(+), 4 deletions(-)

-- 
2.7.4


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
  2021-07-19 12:50  0%         ` Xueming(Steven) Li
@ 2021-07-20  8:59  0%           ` Andrew Rybchenko
  2021-07-29  4:13  0%             ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-07-20  8:59 UTC (permalink / raw)
  To: Xueming(Steven) Li, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable

On 7/19/21 3:50 PM, Xueming(Steven) Li wrote:
> 
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Monday, July 19, 2021 8:36 PM
>> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang
>> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
>> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
>> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
>> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
>>
>> On 7/19/21 2:54 PM, Xueming(Steven) Li wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>> Sent: Monday, July 19, 2021 4:46 PM
>>>> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde
>>>> <ajit.khaparde@broadcom.com>; Somnath Kotur
>>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong
>>>> Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>;
>>>> Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>;
>>>> Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>;
>>>> Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
>>>> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
>>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
>>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
>>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
>>>> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
>>>>
>>>> On 7/19/21 9:58 AM, Xueming(Steven) Li wrote:
>>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>> Sent: Tuesday, July 13, 2021 12:18 AM
>>>>>> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
>>>>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>;
>>>>>> Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
>>>>>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi
>>>>>> Zhang <qi.z.zhang@intel.com>; Haiyue Wang <haiyue.wang@intel.com>;
>>>>>> Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>;
>>>>>> Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas
>>>>>> Monjalon <thomas@monjalon.net>; Ferruh Yigit
>>>>>> <ferruh.yigit@intel.com>;
>>>>>> Xueming(Steven) Li <xuemingl@nvidia.com>
>>>>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
>>>>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
>>>>>> Subject: [PATCH] ethdev: fix representor port ID search by name
>>>>>>
>>>>>> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>>>>>>
>>>>>> Fix representor port ID search by name if the representor itself
>>>>>> does not provide representors info. Getting a list of representors
>>>>>> from a representor does not make sense. Instead, a parent device
>>>> should be used.
>>>>>>
>>>>>> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
>>>>>>
>>>>>> Fixes: df7547a6a2cc ("ethdev: add helper function to get
>>>>>> representor
>>>>>> ID")
>>>>>> Cc: stable@dpdk.org
>>>>>>
>>>>>> Signed-off-by: Viacheslav Galaktionov
>>>>>> <viacheslav.galaktionov@oktetlabs.ru>
>>>>>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>> ---
>>>>>> The new field is added into the hole in rte_eth_dev_data structure.
>>>>>> The patch does not change ABI, but extra care is required since ABI
>>>>>> check is disabled for the structure because of the libabigail bug
>>>> [1].
>>>>>>
>>>>>> Potentially it is bad for out-of-tree drivers which implement
>>>>>> representors but do not fill in a new parert_port_id field in rte_eth_dev_data structure. Do we care?
>>>>>>
>>>>>> May be the patch should add lines to release notes, but I'd like to get initial feedback first.
>>>>>>
>>>>>> mlx5 changes should be reviwed by maintainers very carefully, since we are not sure if we patch it correctly.
>>>>>>
>>>>>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>>>>
>>>> [snip]
>>>>
>>>>>> --- a/lib/ethdev/ethdev_driver.h
>>>>>> +++ b/lib/ethdev/ethdev_driver.h
>>>>>> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
>>>>>>      * For backward compatibility, if no representor info, direct
>>>>>>      * map legacy VF (no controller and pf).
>>>>>>      *
>>>>>> - * @param ethdev
>>>>>> - *  Handle of ethdev port.
>>>>>> + * @param parent_port_id
>>>>>> + *  Port ID of the backing device.
>>>>>>      * @param type
>>>>>>      *  Representor type.
>>>>>>      * @param controller
>>>>>> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
>>>>>>      */
>>>>>>     __rte_internal
>>>>>>     int
>>>>>> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>>>>>> +rte_eth_representor_id_get(uint16_t parent_port_id,
>>>>>
>>>>> It make more sense to get representor info from parent port.
>>>>> Representor is a member of switch domain, PMD owns the information
>>>>> of the representor owner port and info of representors. This change
>>>>> looks better, but not sure whether it valuable to introduce a new
>>>> member to the EAL data structure.
>>>>
>>>> IMHO, it is simply incorrect to return representors info on a
>>>> representor itself. Representor info is an information which representors may be populated using the device.
>>>>
>>>> If above statement is correct, we need a way to get parent device by
>>>> representor to do name to representor ID mapping. I see two options to do it:
>>>>     A. Dedicated field in rte_eth_dev_data as the patch does.
>>>>     B. Dedicated ethdev op (since representor knows parent port ID anyway).
>>>> We have chosen (A) because of simplicity.
>>>
>>> Just recalled that representor port could be probed w/o owner PF, is a force for parent port?
>>
>> I thought that it is impossible and parent port is absolutely required for a representor. Could you provide an example and explain how
>> will it work?
> 
> In case of bonding, PF0 and PF1 become one PF port `bond0`, PCI address is PF0.
> 	-a <PF0>,representor=pf[0-1]vf[0-99] // this is the syntax we proposed.

Is it net/bonding or vendor-specific bonding in HW?
If I remember correctly in the case of net/bonding we have ethdev ports
for bonded devices.

> 
> To be backward compatible, also support the following 2 devargs:
> 	-a <pf0>,representor=[0-99] // probe bond0 and representor on pf0
> 	-a <pf1>,representor=[0-99] // probe representors on pf1.
> If devargs start with PF1 devargs, no owner PF1 created as it disabled in bonding. Can't create bond0(PF0) automatically here as
> device is located by PCI address(PF1) from devargs.

So, I guess the problem is vendor-specific bonding in HW. Anyway, the
legacy backward-compatible representor spec should not require
representor info, since it worked before without it. So, it does not
sound like a reason to have representor info on a representor itself.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/2] drivers: add octeontx crypto adapter framework
  @ 2021-07-20 11:58  3%         ` Akhil Goyal
  2021-07-20 12:14  0%           ` David Marchand
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-07-20 11:58 UTC (permalink / raw)
  To: David Marchand, Thomas Monjalon, Ray Kinsella
  Cc: dev, Pavan Nikhilesh Bhagavatula, Anoob Joseph,
	Abhinandan Gujjar, Ankur Dwivedi, Jerin Jacob Kollanukkaran,
	Aaron Conole, dpdklab, Lincoln Lavoie, Shijith Thotton

 Hi David,
> 
> > >  deps += ['common_octeontx', 'mempool_octeontx', 'bus_vdev',
> > 'net_octeontx']
> > > +deps += ['crypto_octeontx']
> >
> > This extra dependency resulted in disabling the event/octeontx driver
> > in FreeBSD, since crypto/octeontx only builds on Linux.
> > Removing hw support triggers a ABI failure for FreeBSD.
> >
> >
> > - This had been reported by UNH CI:
> > http://mails.dpdk.org/archives/test-report/2021-June/200637.html
> > It seems the result has been ignored but it should have at least
> > raised some discussion.
> >
> This was highlighted to CI ML
> http://patches.dpdk.org/project/dpdk/patch/0686a7c3fb3a22e37378a8545b
> c37bce04f4c391.1624481225.git.sthotton@marvell.com/
> 
> but I think I missed to take the follow up with Brandon and applied the patch
> as it did not look an issue to me as octeon drivers are not currently built on
> FreeBSD.
> Not sure why event driver is getting built there.
> 
> >
> > - I asked UNH to stop testing FreeBSD abi for now, waiting to get the
> > main branch fixed.
> >
> > I don't have the time to look at this, please can you work on it?
> >
> > Several options:
> > * crypto/octeontx is made so that it compiles on FreeBSD,
> > * the abi check is extended to have exceptions per OS,
> > * the FreeBSD abi reference is regenerated at UNH not to have those
> > drivers in it (not sure it is doable),
> 
> Thanks for the suggestions, we are working on it to resolve this as soon as
> possible.
> We may need to add exception in ABI checking so that it does not shout if a
> PMD
> is not compiled.
Can we have the below change? Will it work to disable compilation of
event/octeontx2 for FreeBSD? I believe building it there was enabled by mistake
earlier, as all other octeontx2 drivers are not built on platforms other than Linux.

diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
index 96ebb1f2e7..1ebc51f73f 100644
--- a/drivers/event/octeontx2/meson.build
+++ b/drivers/event/octeontx2/meson.build
@@ -2,7 +2,7 @@
 # Copyright(C) 2019 Marvell International Ltd.
 #

-if not dpdk_conf.get('RTE_ARCH_64')
+if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
     build = false
     reason = 'only supported on 64-bit'
     subdir_done()

Or if this does not work, then we would need to add an exception in the ABI checking.
Any suggestions on how to do this?

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/2] drivers: add octeontx crypto adapter framework
  2021-07-20 11:58  3%         ` Akhil Goyal
@ 2021-07-20 12:14  0%           ` David Marchand
  2021-07-21  9:44  3%             ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: David Marchand @ 2021-07-20 12:14 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: Thomas Monjalon, Ray Kinsella, dev, Pavan Nikhilesh Bhagavatula,
	Anoob Joseph, Abhinandan Gujjar, Ankur Dwivedi,
	Jerin Jacob Kollanukkaran, Aaron Conole, dpdklab, Lincoln Lavoie,
	Shijith Thotton

On Tue, Jul 20, 2021 at 1:59 PM Akhil Goyal <gakhil@marvell.com> wrote:
>
>  Hi David,
> >
> > > >  deps += ['common_octeontx', 'mempool_octeontx', 'bus_vdev',
> > > 'net_octeontx']
> > > > +deps += ['crypto_octeontx']
> > >
> > > This extra dependency resulted in disabling the event/octeontx driver
> > > in FreeBSD, since crypto/octeontx only builds on Linux.
> > > Removing hw support triggers a ABI failure for FreeBSD.
> > >
> > >
> > > - This had been reported by UNH CI:
> > > http://mails.dpdk.org/archives/test-report/2021-June/200637.html
> > > It seems the result has been ignored but it should have at least
> > > raised some discussion.
> > >
> > This was highlighted to CI ML
> > http://patches.dpdk.org/project/dpdk/patch/0686a7c3fb3a22e37378a8545b
> > c37bce04f4c391.1624481225.git.sthotton@marvell.com/
> >
> > but I think I missed to take the follow up with Brandon and applied the patch
> > as it did not look an issue to me as octeon drivers are not currently built on
> > FreeBSD.
> > Not sure why event driver is getting built there.
> >
> > >
> > > - I asked UNH to stop testing FreeBSD abi for now, waiting to get the
> > > main branch fixed.
> > >
> > > I don't have the time to look at this, please can you work on it?
> > >
> > > Several options:
> > > * crypto/octeontx is made so that it compiles on FreeBSD,
> > > * the abi check is extended to have exceptions per OS,
> > > * the FreeBSD abi reference is regenerated at UNH not to have those
> > > drivers in it (not sure it is doable),
> >
> > Thanks for the suggestions, we are working on it to resolve this as soon as
> > possible.
> > We may need to add exception in ABI checking so that it does not shout if a
> > PMD
> > is not compiled.
> Can we have below change? Will it work to disable compilation of
> event/octeontx2 for FreeBSD? I believe this was done by mistake earlier
> as all other octeontx2 drivers are compiled off on platforms other than Linux.
>
> diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
> index 96ebb1f2e7..1ebc51f73f 100644
> --- a/drivers/event/octeontx2/meson.build
> +++ b/drivers/event/octeontx2/meson.build
> @@ -2,7 +2,7 @@
>  # Copyright(C) 2019 Marvell International Ltd.
>  #
>
> -if not dpdk_conf.get('RTE_ARCH_64')
> +if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
>      build = false
>      reason = 'only supported on 64-bit'
>      subdir_done()

I did not suggest this possibility.
That's the same as for the other octeon drivers; such a change has been
deferred to 21.11.
https://patches.dpdk.org/project/dpdk/list/?series=15885



>
> Or of this does not work, then we would need to add exception in ABI checking.
> Any suggestions how to do this?

Sorry, no good idea from me.


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] eal: fix argument to rte_bsf32_safe
  2021-07-19 22:00  3%   ` Stephen Hemminger
@ 2021-07-20 13:26  0%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-07-20 13:26 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: Tyler Retzlaff, dev, anatoly.burakov, david.marchand

20/07/2021 00:00, Stephen Hemminger:
> On Mon, 19 Jul 2021 10:15:34 -0700
> Tyler Retzlaff <roretzla@linux.microsoft.com> wrote:
> 
> > On Tue, Jul 13, 2021 at 01:12:21PM -0700, Stephen Hemminger wrote:
> > > The first argument to rte_bsf32_safe was incorrectly declared as
> > > a 64 bit value. This function only correctly handles on 32 bit values
> > > and the underlying function rte_bsf32 only accepts 32 bit values.
> > > This was introduced when the safe version was added and probably cause
> > > by copy/paste from the 64 bit version.  
> > 
> > there are multiple errors in this family of functions [1] both in usage
> > and signatures. we previously discussed rolling all fixes up into a single
> > patch and announcing an api break.
> > 
> > a doc patch was submitted as per the process documented for breaking api
> > but received no replies [2]
> > 
> > i have a full patch that corrects the whole family if you would like to
> > take it instead. contact me offline if you are interested.
> > 
> > 1. http://mails.dpdk.org/archives/dev/2021-March/201590.html
> > 2. http://mails.dpdk.org/archives/dev/2021-March/201868.html
> > 
> > the change stand-alone is correct so
> > 
> > Acked-By: Tyler Retzlaff <roretzla@linux.microsoft.com>
> 
> Thanks, I think the larger set should go into 21.11 where API/ABI break
> would be ok. My bit was all about fixing the bug where current code
> breaks C++ users.

Shouldn't we have a note in the API changes section of the release notes?




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/2] drivers: add octeontx crypto adapter framework
  2021-07-20 12:14  0%           ` David Marchand
@ 2021-07-21  9:44  3%             ` Thomas Monjalon
  2021-07-21 15:11  4%               ` Brandon Lo
  2021-07-22  7:45  0%               ` Akhil Goyal
  0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2021-07-21  9:44 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: dev, Ray Kinsella, Pavan Nikhilesh Bhagavatula, Anoob Joseph,
	Abhinandan Gujjar, Ankur Dwivedi, Jerin Jacob Kollanukkaran,
	Aaron Conole, dpdklab, Lincoln Lavoie, Shijith Thotton,
	David Marchand

20/07/2021 14:14, David Marchand:
> On Tue, Jul 20, 2021 at 1:59 PM Akhil Goyal <gakhil@marvell.com> wrote:
> >
> >  Hi David,
> > >
> > > > >  deps += ['common_octeontx', 'mempool_octeontx', 'bus_vdev',
> > > > 'net_octeontx']
> > > > > +deps += ['crypto_octeontx']
> > > >
> > > > This extra dependency resulted in disabling the event/octeontx driver
> > > > in FreeBSD, since crypto/octeontx only builds on Linux.
> > > > Removing hw support triggers a ABI failure for FreeBSD.
> > > >
> > > >
> > > > - This had been reported by UNH CI:
> > > > http://mails.dpdk.org/archives/test-report/2021-June/200637.html
> > > > It seems the result has been ignored but it should have at least
> > > > raised some discussion.
> > > >
> > > This was highlighted to CI ML
> > > http://patches.dpdk.org/project/dpdk/patch/0686a7c3fb3a22e37378a8545b
> > > c37bce04f4c391.1624481225.git.sthotton@marvell.com/
> > >
> > > but I think I missed to take the follow up with Brandon and applied the patch
> > > as it did not look an issue to me as octeon drivers are not currently built on
> > > FreeBSD.
> > > Not sure why event driver is getting built there.
> > >
> > > >
> > > > - I asked UNH to stop testing FreeBSD abi for now, waiting to get the
> > > > main branch fixed.
> > > >
> > > > I don't have the time to look at this, please can you work on it?
> > > >
> > > > Several options:
> > > > * crypto/octeontx is made so that it compiles on FreeBSD,
> > > > * the abi check is extended to have exceptions per OS,
> > > > * the FreeBSD abi reference is regenerated at UNH not to have those
> > > > drivers in it (not sure it is doable),
> > >
> > > Thanks for the suggestions, we are working on it to resolve this as soon as
> > > possible.
> > > We may need to add exception in ABI checking so that it does not shout if a
> > > PMD
> > > is not compiled.
> > Can we have below change? Will it work to disable compilation of
> > event/octeontx2 for FreeBSD? I believe this was done by mistake earlier
> > as all other octeontx2 drivers are compiled off on platforms other than Linux.
> >
> > diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
> > index 96ebb1f2e7..1ebc51f73f 100644
> > --- a/drivers/event/octeontx2/meson.build
> > +++ b/drivers/event/octeontx2/meson.build
> > @@ -2,7 +2,7 @@
> >  # Copyright(C) 2019 Marvell International Ltd.
> >  #
> >
> > -if not dpdk_conf.get('RTE_ARCH_64')
> > +if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
> >      build = false
> >      reason = 'only supported on 64-bit'
> >      subdir_done()
> 
> I did not suggest this possibility.
> That's the same as for other octeon drivers, such change has been
> deferred to 21.11.
> https://patches.dpdk.org/project/dpdk/list/?series=15885
> 
> >
> > Or of this does not work, then we would need to add exception in ABI checking.
> > Any suggestions how to do this?
> 
> Sorry, no good idea from me.

We would need to revert the change breaking the ABI test.
But I don't understand why it seems to be passing in recent CI runs?



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/2] drivers: add octeontx crypto adapter framework
  2021-07-21  9:44  3%             ` Thomas Monjalon
@ 2021-07-21 15:11  4%               ` Brandon Lo
  2021-07-22  7:45  0%               ` Akhil Goyal
  1 sibling, 0 replies; 200+ results
From: Brandon Lo @ 2021-07-21 15:11 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: Akhil Goyal, dev, Ray Kinsella, Pavan Nikhilesh Bhagavatula,
	Anoob Joseph, Abhinandan Gujjar, Ankur Dwivedi,
	Jerin Jacob Kollanukkaran, Aaron Conole, dpdklab, Lincoln Lavoie,
	Shijith Thotton, David Marchand

On Wed, Jul 21, 2021 at 5:44 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 20/07/2021 14:14, David Marchand:
> > On Tue, Jul 20, 2021 at 1:59 PM Akhil Goyal <gakhil@marvell.com> wrote:
> > >
> > >  Hi David,
> > > >
> > > > > >  deps += ['common_octeontx', 'mempool_octeontx', 'bus_vdev',
> > > > > 'net_octeontx']
> > > > > > +deps += ['crypto_octeontx']
> > > > >
> > > > > This extra dependency resulted in disabling the event/octeontx driver
> > > > > in FreeBSD, since crypto/octeontx only builds on Linux.
> > > > > Removing hw support triggers a ABI failure for FreeBSD.
> > > > >
> > > > >
> > > > > - This had been reported by UNH CI:
> > > > > http://mails.dpdk.org/archives/test-report/2021-June/200637.html
> > > > > It seems the result has been ignored but it should have at least
> > > > > raised some discussion.
> > > > >
> > > > This was highlighted to CI ML
> > > > http://patches.dpdk.org/project/dpdk/patch/0686a7c3fb3a22e37378a8545b
> > > > c37bce04f4c391.1624481225.git.sthotton@marvell.com/
> > > >
> > > > but I think I missed to take the follow up with Brandon and applied the patch
> > > > as it did not look an issue to me as octeon drivers are not currently built on
> > > > FreeBSD.
> > > > Not sure why event driver is getting built there.
> > > >
> > > > >
> > > > > - I asked UNH to stop testing FreeBSD abi for now, waiting to get the
> > > > > main branch fixed.
> > > > >
> > > > > I don't have the time to look at this, please can you work on it?
> > > > >
> > > > > Several options:
> > > > > * crypto/octeontx is made so that it compiles on FreeBSD,
> > > > > * the abi check is extended to have exceptions per OS,
> > > > > * the FreeBSD abi reference is regenerated at UNH not to have those
> > > > > drivers in it (not sure it is doable),
> > > >
> > > > Thanks for the suggestions, we are working on it to resolve this as soon as
> > > > possible.
> > > > We may need to add exception in ABI checking so that it does not shout if a
> > > > PMD
> > > > is not compiled.
> > > Can we have below change? Will it work to disable compilation of
> > > event/octeontx2 for FreeBSD? I believe this was done by mistake earlier
> > > as all other octeontx2 drivers are compiled off on platforms other than Linux.
> > >
> > > diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
> > > index 96ebb1f2e7..1ebc51f73f 100644
> > > --- a/drivers/event/octeontx2/meson.build
> > > +++ b/drivers/event/octeontx2/meson.build
> > > @@ -2,7 +2,7 @@
> > >  # Copyright(C) 2019 Marvell International Ltd.
> > >  #
> > >
> > > -if not dpdk_conf.get('RTE_ARCH_64')
> > > +if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
> > >      build = false
> > >      reason = 'only supported on 64-bit'
> > >      subdir_done()
> >
> > I did not suggest this possibility.
> > That's the same as for other octeon drivers, such change has been
> > deferred to 21.11.
> > https://patches.dpdk.org/project/dpdk/list/?series=15885
> >
> > >
> > > Or of this does not work, then we would need to add exception in ABI checking.
> > > Any suggestions how to do this?
> >
> > Sorry, no good idea from me.
>
> We would need to revert the change breaking the ABI test.
> But I don't understand why it seems passing in recent CI runs?

Hi Thomas,

For the UNH lab, FreeBSD 13 ABI tests have been disabled due to a request
made during the community CI meeting on July 15th.

The recent CI ABI runs will show up as passes, but the older runs with
FreeBSD 13 included will keep their recorded failures.

Thanks,
Brandon


--
Brandon Lo
UNH InterOperability Laboratory
21 Madbury Rd, Suite 100, Durham, NH 03824
blo@iol.unh.edu
www.iol.unh.edu

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v2 1/2] drivers: add octeontx crypto adapter framework
  2021-07-21  9:44  3%             ` Thomas Monjalon
  2021-07-21 15:11  4%               ` Brandon Lo
@ 2021-07-22  7:45  0%               ` Akhil Goyal
  2021-07-22  9:06  3%                 ` [dpdk-dev] [PATCH] crypto/octeontx: enable build on non Linux OS Shijith Thotton
  1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-07-22  7:45 UTC (permalink / raw)
  To: Thomas Monjalon, David Marchand
  Cc: dev, Ray Kinsella, Pavan Nikhilesh Bhagavatula, Anoob Joseph,
	Abhinandan Gujjar, Ankur Dwivedi, Jerin Jacob Kollanukkaran,
	Aaron Conole, dpdklab, Lincoln Lavoie, Shijith Thotton

> 20/07/2021 14:14, David Marchand:
> > On Tue, Jul 20, 2021 at 1:59 PM Akhil Goyal <gakhil@marvell.com> wrote:
> > >
> > >  Hi David,
> > > >
> > > > > >  deps += ['common_octeontx', 'mempool_octeontx', 'bus_vdev',
> > > > > 'net_octeontx']
> > > > > > +deps += ['crypto_octeontx']
> > > > >
> > > > > This extra dependency resulted in disabling the event/octeontx driver
> > > > > in FreeBSD, since crypto/octeontx only builds on Linux.
> > > > > Removing hw support triggers a ABI failure for FreeBSD.
> > > > >
> > > > >
> > > > > - This had been reported by UNH CI:
> > > > > http://mails.dpdk.org/archives/test-report/2021-June/200637.html
> > > > > It seems the result has been ignored but it should have at least
> > > > > raised some discussion.
> > > > >
> > > > This was highlighted to CI ML
> > > > http://patches.dpdk.org/project/dpdk/patch/0686a7c3fb3a22e37378a8545b
> > > > c37bce04f4c391.1624481225.git.sthotton@marvell.com/
> > > >
> > > > but I think I missed to take the follow up with Brandon and applied the
> patch
> > > > as it did not look an issue to me as octeon drivers are not currently built
> on
> > > > FreeBSD.
> > > > Not sure why event driver is getting built there.
> > > >
> > > > >
> > > > > - I asked UNH to stop testing FreeBSD abi for now, waiting to get the
> > > > > main branch fixed.
> > > > >
> > > > > I don't have the time to look at this, please can you work on it?
> > > > >
> > > > > Several options:
> > > > > * crypto/octeontx is made so that it compiles on FreeBSD,
> > > > > * the abi check is extended to have exceptions per OS,
> > > > > * the FreeBSD abi reference is regenerated at UNH not to have those
> > > > > drivers in it (not sure it is doable),
> > > >
> > > > Thanks for the suggestions, we are working on it to resolve this as soon
> as
> > > > possible.
> > > > We may need to add exception in ABI checking so that it does not shout
> if a
> > > > PMD
> > > > is not compiled.
> > > Can we have below change? Will it work to disable compilation of
> > > event/octeontx2 for FreeBSD? I believe this was done by mistake earlier
> > > as all other octeontx2 drivers are compiled off on platforms other than
> Linux.
> > >
> > > diff --git a/drivers/event/octeontx2/meson.build
> b/drivers/event/octeontx2/meson.build
> > > index 96ebb1f2e7..1ebc51f73f 100644
> > > --- a/drivers/event/octeontx2/meson.build
> > > +++ b/drivers/event/octeontx2/meson.build
> > > @@ -2,7 +2,7 @@
> > >  # Copyright(C) 2019 Marvell International Ltd.
> > >  #
> > >
> > > -if not dpdk_conf.get('RTE_ARCH_64')
> > > +if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
> > >      build = false
> > >      reason = 'only supported on 64-bit'
> > >      subdir_done()
> >
> > I did not suggest this possibility.
> > That's the same as for other octeon drivers, such change has been
> > deferred to 21.11.
> > https://patches.dpdk.org/project/dpdk/list/?series=15885
> >
> > >
> > > Or of this does not work, then we would need to add exception in ABI
> checking.
> > > Any suggestions how to do this?
> >
> > Sorry, no good idea from me.
> 
> We would need to revert the change breaking the ABI test.
> But I don't understand why it seems passing in recent CI runs?
> 
It is passing because FreeBSD is currently skipped. Right, David?
BTW, no need to revert; we will be sending a patch to enable compilation
of crypto/octeontx.


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] crypto/octeontx: enable build on non Linux OS
  2021-07-22  7:45  0%               ` Akhil Goyal
@ 2021-07-22  9:06  3%                 ` Shijith Thotton
  2021-07-22  9:17  0%                   ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Shijith Thotton @ 2021-07-22  9:06 UTC (permalink / raw)
  To: gakhil, thomas
  Cc: abhinandan.gujjar, aconole, adwivedi, anoobj, david.marchand,
	dev, dpdklab, jerinj, lylavoie, mdr, pbhagavatula, sthotton

Enabled build of the Octeontx crypto PMD on non-Linux OS. Other Octeontx
PMDs are already enabled.

This is to avoid an ABI test failure on an OS once we add a dependency
from a driver which is built to one which is not.

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
---
 drivers/crypto/octeontx/meson.build | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/crypto/octeontx/meson.build b/drivers/crypto/octeontx/meson.build
index 3ae6729e8f..244b16230e 100644
--- a/drivers/crypto/octeontx/meson.build
+++ b/drivers/crypto/octeontx/meson.build
@@ -1,9 +1,5 @@
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2018 Cavium, Inc
-if not is_linux
-    build = false
-    reason = 'only supported on Linux'
-endif
 
 deps += ['bus_pci']
 deps += ['bus_vdev']
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] crypto/octeontx: enable build on non Linux OS
  2021-07-22  9:06  3%                 ` [dpdk-dev] [PATCH] crypto/octeontx: enable build on non Linux OS Shijith Thotton
@ 2021-07-22  9:17  0%                   ` Akhil Goyal
  2021-07-22 19:06  0%                     ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-07-22  9:17 UTC (permalink / raw)
  To: Shijith Thotton, thomas, david.marchand
  Cc: abhinandan.gujjar, aconole, Ankur Dwivedi, Anoob Joseph, dev,
	dpdklab, Jerin Jacob Kollanukkaran, lylavoie, mdr,
	Pavan Nikhilesh Bhagavatula, Shijith Thotton

> Enabled build of Octeontx crypto PMD on non linux OS. Other Octeontx
> PMDs are enabled already.
> 
> This is to avoid ABI test failure on an OS once we add dependency
> between a driver which is built to another which is not.

Fixes: 8dc6c2f12ecf ("crypto/octeontx: add crypto adapter framework")
> 

Reported-by: David Marchand <david.marchand@redhat.com>

> Signed-off-by: Shijith Thotton <sthotton@marvell.com>

Acked-by: Akhil Goyal <gakhil@marvell.com>

Thomas/David: please pick this patch directly on main to fix build on CI for FreeBSD.


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-stable] [PATCH v4] app/testpmd: fix testpmd doesn't show RSS hash offload
  2021-07-19 16:18  0%           ` Ferruh Yigit
@ 2021-07-22 11:03  0%             ` Andrew Rybchenko
  2021-08-09  8:53  0%               ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-07-22 11:03 UTC (permalink / raw)
  To: Ferruh Yigit, Wang, Jie1X, Li, Xiaoyun, dev; +Cc: stable

On 7/19/21 7:18 PM, Ferruh Yigit wrote:
> On 7/19/2021 10:55 AM, Wang, Jie1X wrote:
>>
>>
>>> -----Original Message-----
>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>> Sent: Friday, July 16, 2021 4:52 PM
>>> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Wang, Jie1X <jie1x.wang@intel.com>;
>>> dev@dpdk.org
>>> Cc: andrew.rybchenko@oktetlabs.ru; stable@dpdk.org
>>> Subject: Re: [dpdk-stable] [PATCH v4] app/testpmd: fix testpmd doesn't show
>>> RSS hash offload
>>>
>>> On 7/16/2021 9:30 AM, Li, Xiaoyun wrote:
>>>>> -----Original Message-----
>>>>> From: stable <stable-bounces@dpdk.org> On Behalf Of Li, Xiaoyun
>>>>> Sent: Thursday, July 15, 2021 12:54
>>>>> To: Wang, Jie1X <jie1x.wang@intel.com>; dev@dpdk.org
>>>>> Cc: andrew.rybchenko@oktetlabs.ru; stable@dpdk.org
>>>>> Subject: Re: [dpdk-stable] [PATCH v4] app/testpmd: fix testpmd
>>>>> doesn't show RSS hash offload
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Wang, Jie1X <jie1x.wang@intel.com>
>>>>>> Sent: Thursday, July 15, 2021 19:57
>>>>>> To: dev@dpdk.org
>>>>>> Cc: Li, Xiaoyun <xiaoyun.li@intel.com>;
>>>>>> andrew.rybchenko@oktetlabs.ru; Wang, Jie1X <jie1x.wang@intel.com>;
>>>>>> stable@dpdk.org
>>>>>> Subject: [PATCH v4] app/testpmd: fix testpmd doesn't show RSS hash
>>>>>> offload
>>>>>>
>>>>>> The driver may change offloads info into dev->data->dev_conf in
>>>>>> dev_configure which may cause port->dev_conf and port->rx_conf
>>>>>> contain
>>>>> outdated values.
>>>>>>
>>>>>> This patch updates the offloads info if it changes to fix this issue.
>>>>>>
>>>>>> Fixes: ce8d561418d4 ("app/testpmd: add port configuration settings")
>>>>>> Cc: stable@dpdk.org
>>>>>>
>>>>>> Signed-off-by: Jie Wang <jie1x.wang@intel.com>
>>>>>> ---
>>>>>> v4: delete the whitespace at the end of the line.
>>>>>> v3:
>>>>>>   - check and update the "offloads" of "port->dev_conf.rx/txmode".
>>>>>>   - update the commit log.
>>>>>> v2: copy "rx/txmode.offloads", instead of copying the entire struct
>>>>>> "dev->data-
>>>>>>> dev_conf.rx/txmode".
>>>>>> ---
>>>>>>   app/test-pmd/testpmd.c | 27 +++++++++++++++++++++++++++
>>>>>>   1 file changed, 27 insertions(+)
>>>>>
>>>>> Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
>>>>
>>>> Although I gave my ack, app shouldn't touch rte_eth_devices which this patch
>>> does. Usually, testpmd should only call function like
>>> eth_dev_info_get_print_err().
>>>> But dev_info doesn't contain the info dev->data->dev_conf which the driver
>>> modifies.
>>>>
>>>> Probably we need a better fix.
>>>>
>>>
>>> Agree, an application accessing directly to 'rte_eth_devices' is sign of something
>>> missing/wrong.
>>>
>>> In this case there is no way for application to know what is the configured
>>> offload settings per port and queue. Which is missing part I think.
>>>
>>> As you said normally we get data from PMD mainly via 'rte_eth_dev_info_get()',
>>> which is an overloaded function, it provides many different things, like driver
>>> default values, limitations, current config/status, capabilities etc...
>>>
>>> So I think we can do a few things:
>>> 1) Add current offload configuration to 'rte_eth_dev_info_get()', so application
>>> can get it and use it.
>>> The advantage is this API already called many places, many times, so there is a
>>> big chance that application already have this information when it needs.
>>> Disadvantage is, as mentioned above the API already big and messy, making it
>>> bigger makes more error prone and makes easier to break ABI.
>>>
>> I prefer to choose the 1st suggestion.
>>
>> Normally PMD gets data via 'rte_eth_dev_info_get()'. When we add offloads configuration
>> to it, we can get offloads as same as getting other info.
>>
> 
> Most probably it is easier to implement 1), I see your point but as said before
> I think 'rte_eth_dev_info_get()' is already messy and I am worried to make it
> even bigger.

IMHO, (1) is not an option.

> I prefer option 2).

I'm not sure that an API function for each config parameter is an option
either. We should find a balance. Maybe I'd add something like
rte_eth_dev_get_conf(uint16_t port_id, const struct rte_eth_conf **conf),
which returns a pointer to the up-to-date configuration, i.e. option (3).

The tricky part here is to ensure that all the specific APIs which modify
various bits of the configuration update dev_conf.
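
As an illustration, assuming the proposed rte_eth_dev_get_conf() above (which
does not exist yet), an application could query the configured offloads with a
sketch like this:

/* Sketch only: rte_eth_dev_get_conf() is the proposed API, not an existing one. */
static void
show_rx_offloads(uint16_t port_id)
{
	const struct rte_eth_conf *conf;

	if (rte_eth_dev_get_conf(port_id, &conf) == 0)
		printf("port %u Rx offloads: 0x%" PRIx64 "\n",
			port_id, conf->rxmode.offloads);
}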

> 
> @Thomas, @Andrew, what do you think?
> 
> 
>>> 2) Add a new API to get configured offload information, so a specific API for it.
>>>
>>> 3) Get a more generic API to get configured config (dev_conf) which will cover
>>> offloads too.
>>> Disadvantage can be leaking out too many internal config to user unintentionally.

I don't understand it. dev_conf is provided by the user in
rte_eth_dev_configure().

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] crypto/octeontx: enable build on non Linux OS
  2021-07-22  9:17  0%                   ` Akhil Goyal
@ 2021-07-22 19:06  0%                     ` Thomas Monjalon
  2021-07-22 19:08  3%                       ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-07-22 19:06 UTC (permalink / raw)
  To: Shijith Thotton, Akhil Goyal
  Cc: david.marchand, dev, abhinandan.gujjar, aconole, Ankur Dwivedi,
	Anoob Joseph, dpdklab, Jerin Jacob Kollanukkaran, lylavoie, mdr,
	Pavan Nikhilesh Bhagavatula

22/07/2021 11:17, Akhil Goyal:
> > Enabled build of Octeontx crypto PMD on non linux OS. Other Octeontx
> > PMDs are enabled already.
> > 
> > This is to avoid ABI test failure on an OS once we add dependency
> > between a driver which is built to another which is not.
> 
> Fixes: 8dc6c2f12ecf ("crypto/octeontx: add crypto adapter framework")
> > 
> 
> Reported-by: David Marchand <david.marchand@redhat.com>
> 
> > Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> 
> Acked-by: Akhil Goyal <gakhil@marvell.com>
> 
> Thomas/David: please pick this patch directly on main to fix build on CI for FreeBSD.

Applied, thanks.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] crypto/octeontx: enable build on non Linux OS
  2021-07-22 19:06  0%                     ` Thomas Monjalon
@ 2021-07-22 19:08  3%                       ` Thomas Monjalon
  2021-07-22 20:20  3%                         ` Brandon Lo
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-07-22 19:08 UTC (permalink / raw)
  To: dpdklab, lylavoie, Brandon Lo
  Cc: Shijith Thotton, Akhil Goyal, david.marchand, dev, aconole, ci

22/07/2021 21:06, Thomas Monjalon:
> 22/07/2021 11:17, Akhil Goyal:
> > > Enabled build of Octeontx crypto PMD on non linux OS. Other Octeontx
> > > PMDs are enabled already.
> > > 
> > > This is to avoid ABI test failure on an OS once we add dependency
> > > between a driver which is built to another which is not.
> > 
> > Fixes: 8dc6c2f12ecf ("crypto/octeontx: add crypto adapter framework")
> > > 
> > 
> > Reported-by: David Marchand <david.marchand@redhat.com>
> > 
> > > Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> > 
> > Acked-by: Akhil Goyal <gakhil@marvell.com>
> > 
> > Thomas/David: please pick this patch directly on main to fix build on CI for FreeBSD.
> 
> Applied, thanks.

Please could you re-test the ABI on FreeBSD
and re-enable it in the CI if the test is passing?

Thank you



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] crypto/octeontx: enable build on non Linux OS
  2021-07-22 19:08  3%                       ` Thomas Monjalon
@ 2021-07-22 20:20  3%                         ` Brandon Lo
  2021-07-22 20:32  0%                           ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Brandon Lo @ 2021-07-22 20:20 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dpdklab, lylavoie, Shijith Thotton, Akhil Goyal, david.marchand,
	dev, aconole, ci

On Thu, Jul 22, 2021 at 3:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 22/07/2021 21:06, Thomas Monjalon:
> > 22/07/2021 11:17, Akhil Goyal:
> > > > Enabled build of Octeontx crypto PMD on non linux OS. Other Octeontx
> > > > PMDs are enabled already.
> > > >
> > > > This is to avoid ABI test failure on an OS once we add dependency
> > > > between a driver which is built to another which is not.
> > >
> > > Fixes: 8dc6c2f12ecf ("crypto/octeontx: add crypto adapter framework")
> > > >
> > >
> > > Reported-by: David Marchand <david.marchand@redhat.com>
> > >
> > > > Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> > >
> > > Acked-by: Akhil Goyal <gakhil@marvell.com>
> > >
> > > Thomas/David: please pick this patch directly on main to fix build on CI for FreeBSD.
> >
> > Applied, thanks.
>
> Please could you re-test the ABI on FreeBSD
> and re-enable in the CI if the test is passing?
>
> Thank you

I ran a couple test runs on FreeBSD 13 to ensure that the patch
compiles successfully, and I enabled reporting.
FreeBSD 13 should start to appear in the ABI test results of newer
tarballs with the patch.

Thanks,
Brandon


--
Brandon Lo
UNH InterOperability Laboratory
21 Madbury Rd, Suite 100, Durham, NH 03824
blo@iol.unh.edu
www.iol.unh.edu

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] DPDK Release Status Meeting 22/07/2021
@ 2021-07-22 20:22  3% Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-07-22 20:22 UTC (permalink / raw)
  To: dev; +Cc: john.mcnamara, ferruh.yigit, david.marchand, Christian Ehrhardt

Release Dates
-------------

* v21.08
  - Proposal/V1:    Wednesday,  2 June (completed)
  - rc1:            Saturday,  10 July (completed)
  - rc2:            Friday,    23 July
  - rc3:            Thursday,  29 July
  - rc4:            Wednesday,  4 August
  - Release:        Friday,     6 August

Subtrees
--------

* next-net
  - Bug with libatomic in clang, fixed today.

* next-crypto
  - Pulled yesterday.
  - Only deprecation notices left for this release.
  - ABI check on FreeBSD: fixed today.

* next-eventdev
  - Few patches for -rc3.

* next-virtio
  - Pulled yesterday.
  - One more series to look at (was rejected later).
  - Change on async experimental code - candidate for -rc3

* next-net-brcm
  - No update.

* next-net-intel
  - No update.

* next-net-mlx
  - Integration in progress

* next-net-mrvl
  - Few patches for -rc3.

LTS
---

DPDK 19.11.9 released on Monday by Christian.

Call for help for 19.11.x to fix issues with new toolchains, kernels, etc.



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] crypto/octeontx: enable build on non Linux OS
  2021-07-22 20:20  3%                         ` Brandon Lo
@ 2021-07-22 20:32  0%                           ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-07-22 20:32 UTC (permalink / raw)
  To: Brandon Lo
  Cc: dpdklab, lylavoie, Shijith Thotton, Akhil Goyal, david.marchand,
	dev, aconole, ci

22/07/2021 22:20, Brandon Lo:
> On Thu, Jul 22, 2021 at 3:08 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 22/07/2021 21:06, Thomas Monjalon:
> > > 22/07/2021 11:17, Akhil Goyal:
> > > > > Enabled build of Octeontx crypto PMD on non linux OS. Other Octeontx
> > > > > PMDs are enabled already.
> > > > >
> > > > > This is to avoid ABI test failure on an OS once we add dependency
> > > > > between a driver which is built to another which is not.
> > > >
> > > > Fixes: 8dc6c2f12ecf ("crypto/octeontx: add crypto adapter framework")
> > > > >
> > > >
> > > > Reported-by: David Marchand <david.marchand@redhat.com>
> > > >
> > > > > Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> > > >
> > > > Acked-by: Akhil Goyal <gakhil@marvell.com>
> > > >
> > > > Thomas/David: please pick this patch directly on main to fix build on CI for FreeBSD.
> > >
> > > Applied, thanks.
> >
> > Please could you re-test the ABI on FreeBSD
> > and re-enable in the CI if the test is passing?
> >
> > Thank you
> 
> I ran a couple test runs on FreeBSD 13 to ensure that the patch
> compiles successfully, and I enabled reporting.
> FreeBSD 13 should start to appear in the ABI test results of newer
> tarballs with the patch.

Thanks a lot Brandon, well managed.




^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2] eal: fix argument to rte_bsf32_safe
    2021-07-19 17:15  0% ` Tyler Retzlaff
@ 2021-07-23  0:52  8% ` Stephen Hemminger
  2021-07-23 15:45  8% ` [dpdk-dev] [PATCH v3] " Stephen Hemminger
  2 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-07-23  0:52 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, anatoly.burakov, Tyler Retzlaff

The first argument to rte_bsf32_safe was incorrectly declared as
a 64 bit value. The code only works on 32 bit values and the underlying
function rte_bsf32 only accepts 32 bit values. This was a mistake
introduced when the safe version was added and probably caused
by copy/paste from the 64 bit version.

The bug passed silently under the radar until some other code was
built with -Wall and -Wextra in C++ and C++ complains about the
missing cast.

Yes, this is an API signature change, but the original code was wrong.
It is an inline so not an ABI change.

Fixes: 4e261f551986 ("eal: add 64-bit bsf and 32-bit safe bsf functions")
Cc: anatoly.burakov@intel.com
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-By: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
v2 - add suggested release note

 doc/guides/rel_notes/release_21_08.rst | 4 ++++
 lib/eal/include/rte_common.h           | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index e2c5ccbf7d90..148405891fcb 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -196,6 +196,10 @@ API Changes
   to be thread safe; all Rx queues affected by the API will now need to be
   stopped before making any changes to the power management scheme.
 
+* eal: ``rte_bsf32_safe`` now takes a 32 bit value for its first
+  argument. This fixes warnings about loss of precision when used
+  with some compilers settings.
+
 
 ABI Changes
 -----------
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index d5a32c66a5fe..99eb5f1820ae 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -623,7 +623,7 @@ rte_bsf32(uint32_t v)
  *     Returns 0 if ``v`` was 0, otherwise returns 1.
  */
 static inline int
-rte_bsf32_safe(uint64_t v, uint32_t *pos)
+rte_bsf32_safe(uint32_t v, uint32_t *pos)
 {
 	if (v == 0)
 		return 0;
-- 
2.30.2


^ permalink raw reply	[relevance 8%]

* Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver
  @ 2021-07-23  7:39  3% ` Xia, Chenbo
  2021-07-23 12:46  3%   ` Ferruh Yigit
  2021-07-27 10:58  0% ` Ananyev, Konstantin
  1 sibling, 1 reply; 200+ results
From: Xia, Chenbo @ 2021-07-23  7:39 UTC (permalink / raw)
  To: dev, thomas; +Cc: mdr, nhorman, david.marchand, Yigit, Ferruh

Hi,

A gentle ping for comments..

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenbo Xia
> Sent: Tuesday, June 1, 2021 4:42 PM
> To: dev@dpdk.org; thomas@monjalon.net
> Cc: mdr@ashroe.eu; nhorman@tuxdriver.com
> Subject: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver
> 
> All ABIs in PCI bus driver, which are defined in rte_buc_pci.h,
> will be removed and the header will be made internal.
> 
> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
> ---
>  doc/guides/rel_notes/deprecation.rst | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index 9584d6bfd7..b01f46c62e 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -147,3 +147,8 @@ Deprecation Notices
>  * cmdline: ``cmdline`` structure will be made opaque to hide platform-
> specific
>    content. On Linux and FreeBSD, supported prior to DPDK 20.11,
>    original structure will be kept until DPDK 21.11.
> +
> +* pci: To reduce unnecessary ABIs exposed by DPDK bus driver, "rte_bus_pci.h"
> +  will be made internal in 21.11 and macros/data structures/functions defined
> +  in the header will not be considered as ABI anymore. This change is
> inspired
> +  by the RFC https://patchwork.dpdk.org/project/dpdk/list/?series=17176.

I see there's some ABI improvement work on-going and I think it could be part of
the work. If it makes sense to you, I'd like some ACKs.

Thanks,
Chenbo

> --
> 2.17.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v2] doc: update atomic operation deprecation
    2021-07-17 18:47  0% ` Honnappa Nagarahalli
@ 2021-07-23  9:49  4% ` Joyce Kong
  1 sibling, 0 replies; 200+ results
From: Joyce Kong @ 2021-07-23  9:49 UTC (permalink / raw)
  To: thomas, stephen, honnappa.nagarahalli, ruifeng.wang, mdr; +Cc: dev, nd, stable

Update the incorrect description about atomic operations
with provided wrappers in deprecation doc[1].

[1]https://mails.dpdk.org/archives/dev/2021-July/213333.html

Fixes: 7518c5c4ae6a ("doc: announce adoption of C11 atomic operations semantics")
Cc: stable@dpdk.org

Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 doc/guides/rel_notes/deprecation.rst | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 9584d6bfd7..a4f350fa09 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -19,16 +19,18 @@ Deprecation Notices
 
 * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
   not allow for writing optimized code for all the CPU architectures supported
-  in DPDK. DPDK will adopt C11 atomic operations semantics and provide wrappers
-  using C11 atomic built-ins. These wrappers must be used for patches that
-  need to be merged in 20.08 onwards. This change will not introduce any
-  performance degradation.
+  in DPDK. DPDK has adopted the atomic operations from
+  https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html. These
+  operations must be used for patches that need to be merged in 20.08 onwards.
+  This change will not introduce any performance degradation.
 
 * rte_smp_*mb: These APIs provide full barrier functionality. However, many
-  use cases do not require full barriers. To support such use cases, DPDK will
-  adopt C11 barrier semantics and provide wrappers using C11 atomic built-ins.
-  These wrappers must be used for patches that need to be merged in 20.08
-  onwards. This change will not introduce any performance degradation.
+  use cases do not require full barriers. To support such use cases, DPDK has
+  adopted atomic operations from
+  https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html. These
+  operations and a new wrapper ``rte_atomic_thread_fence`` instead of
+  ``__atomic_thread_fence`` must be used for patches that need to be merged in
+  20.08 onwards. This change will not introduce any performance degradation.
 
 * lib: will fix extending some enum/define breaking the ABI. There are multiple
   samples in DPDK that enum/define terminated with a ``.*MAX.*`` value which is
-- 
2.17.1
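
For illustration, the migration described by this notice amounts to changes of
the following kind; this is only a sketch of the before/after pattern, not code
from the patch itself:

static void
count_and_publish(rte_atomic32_t *old_cnt, uint32_t *new_cnt)
{
	/* legacy pattern: rte_atomicNN_* API plus a full barrier */
	rte_atomic32_inc(old_cnt);
	rte_smp_wmb();

	/* pattern expected from 20.08 onwards: __atomic built-ins plus the fence wrapper */
	__atomic_fetch_add(new_cnt, 1, __ATOMIC_RELAXED);
	rte_atomic_thread_fence(__ATOMIC_RELEASE);
}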


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver
  2021-07-23  7:39  3% ` Xia, Chenbo
@ 2021-07-23 12:46  3%   ` Ferruh Yigit
  2021-07-26  5:56  0%     ` Xia, Chenbo
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-07-23 12:46 UTC (permalink / raw)
  To: Xia, Chenbo, dev, thomas; +Cc: mdr, nhorman, david.marchand

On 7/23/2021 8:39 AM, Xia, Chenbo wrote:
> Hi,
> 
> A gentle ping for comments..
> 
>> -----Original Message-----
>> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenbo Xia
>> Sent: Tuesday, June 1, 2021 4:42 PM
>> To: dev@dpdk.org; thomas@monjalon.net
>> Cc: mdr@ashroe.eu; nhorman@tuxdriver.com
>> Subject: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver
>>
>> All ABIs in PCI bus driver, which are defined in rte_buc_pci.h,
>> will be removed and the header will be made internal.
>>
>> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
>> ---
>>  doc/guides/rel_notes/deprecation.rst | 5 +++++
>>  1 file changed, 5 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst
>> b/doc/guides/rel_notes/deprecation.rst
>> index 9584d6bfd7..b01f46c62e 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -147,3 +147,8 @@ Deprecation Notices
>>  * cmdline: ``cmdline`` structure will be made opaque to hide platform-
>> specific
>>    content. On Linux and FreeBSD, supported prior to DPDK 20.11,
>>    original structure will be kept until DPDK 21.11.
>> +
>> +* pci: To reduce unnecessary ABIs exposed by DPDK bus driver, "rte_bus_pci.h"
>> +  will be made internal in 21.11 and macros/data structures/functions defined
>> +  in the header will not be considered as ABI anymore. This change is
>> inspired
>> +  by the RFC https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
> 
> I see there's some ABI improvement work on-going and I think it could be part of
> the work. If it makes sense to you, I'd like some ACKs.
> 

Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>

I am for reducing the public ABI as much as possible. How big will the change
be? Is 'rte_bus_pci.h' used anywhere other than in './drivers/bus/pci/'?

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v3] eal: fix argument to rte_bsf32_safe
    2021-07-19 17:15  0% ` Tyler Retzlaff
  2021-07-23  0:52  8% ` [dpdk-dev] [PATCH v2] " Stephen Hemminger
@ 2021-07-23 15:45  8% ` Stephen Hemminger
  2021-07-24  7:58  0%   ` Thomas Monjalon
  2 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-07-23 15:45 UTC (permalink / raw)
  To: dev; +Cc: Stephen Hemminger, anatoly.burakov, Tyler Retzlaff

The first argument to rte_bsf32_safe was incorrectly declared as
a 64 bit value. The code only works on 32 bit values and the underlying
function rte_bsf32 only accepts 32 bit values. This was a mistake
introduced when the safe version was added and probably cause
by copy/paste from the 64 bit version.

The bug passed silently under the radar until some other code was
built with -Wall and -Wextra in C++ and C++ complains about the
missing cast.

Yes, this is an API signature change, but the original code was wrong.
It is an inline so not an ABI change.

Fixes: 4e261f551986 ("eal: add 64-bit bsf and 32-bit safe bsf functions")
Cc: anatoly.burakov@intel.com
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
v3 - reword commit description for checkpatch

 doc/guides/rel_notes/release_21_08.rst | 4 ++++
 lib/eal/include/rte_common.h           | 2 +-
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index e2c5ccbf7d90..148405891fcb 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -196,6 +196,10 @@ API Changes
   to be thread safe; all Rx queues affected by the API will now need to be
   stopped before making any changes to the power management scheme.
 
+* eal: ``rte_bsf32_safe`` now takes a 32 bit value for its first
+  argument. This fixes warnings about loss of precision when used
+  with some compilers settings.
+
 
 ABI Changes
 -----------
diff --git a/lib/eal/include/rte_common.h b/lib/eal/include/rte_common.h
index d5a32c66a5fe..99eb5f1820ae 100644
--- a/lib/eal/include/rte_common.h
+++ b/lib/eal/include/rte_common.h
@@ -623,7 +623,7 @@ rte_bsf32(uint32_t v)
  *     Returns 0 if ``v`` was 0, otherwise returns 1.
  */
 static inline int
-rte_bsf32_safe(uint64_t v, uint32_t *pos)
+rte_bsf32_safe(uint32_t v, uint32_t *pos)
 {
 	if (v == 0)
 		return 0;
-- 
2.30.2


^ permalink raw reply	[relevance 8%]

* Re: [dpdk-dev] [PATCH] devtools: test different build types
  @ 2021-07-23 20:26  0%   ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-07-23 20:26 UTC (permalink / raw)
  To: David Marchand, Thomas Monjalon; +Cc: dev, Bruce Richardson

On 5/21/21 6:03 PM, David Marchand wrote:
> On Mon, Apr 12, 2021 at 11:54 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>> @@ -213,9 +218,10 @@ for c in gcc clang ; do
>>                          abicheck=ABI
> 
> init of buildtype var is missing here.

+1

> Rest lgtm.
> 
>>                  else
>>                          abicheck=skipABI # save time and disk space
>> +                       buildtype='--buildtype=minsize'
>>                  fi
>>                  export CC="$CCACHE $c"
>> -               build build-$c-$s $c $abicheck --default-library=$s
>> +               build build-$c-$s $c $abicheck $buildtype --default-library=$s
>>                  unset CC
>>          done
>>   done
> 
> 

with review notes applied:

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] eal: fix argument to rte_bsf32_safe
  2021-07-23 15:45  8% ` [dpdk-dev] [PATCH v3] " Stephen Hemminger
@ 2021-07-24  7:58  0%   ` Thomas Monjalon
  2021-07-24 23:50  0%     ` Stephen Hemminger
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-07-24  7:58 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dev, anatoly.burakov, Tyler Retzlaff

23/07/2021 17:45, Stephen Hemminger:
> The first argument to rte_bsf32_safe was incorrectly declared as
> a 64 bit value. The code only works on 32 bit values and the underlying
> function rte_bsf32 only accepts 32 bit values. This was a mistake
> introduced when the safe version was added and probably cause
> by copy/paste from the 64 bit version.
> 
> The bug passed silently under the radar until some other code was
> built with -Wall and -Wextra in C++ and C++ complains about the
> missing cast.
> 
> Yes, this is a API signature change, but the original code was wrong.
> It is an inline so not an ABI change.
> 
> Fixes: 4e261f551986 ("eal: add 64-bit bsf and 32-bit safe bsf functions")
> Cc: anatoly.burakov@intel.com
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>

+Cc: stable@dpdk.org

Applied, thanks.

I think these functions lack a reference to the name Bit Scan Forward.





^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v3] eal: fix argument to rte_bsf32_safe
  2021-07-24  7:58  0%   ` Thomas Monjalon
@ 2021-07-24 23:50  0%     ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-07-24 23:50 UTC (permalink / raw)
  To: Thomas Monjalon; +Cc: dev, anatoly.burakov, Tyler Retzlaff

On Sat, 24 Jul 2021 09:58:44 +0200
Thomas Monjalon <thomas@monjalon.net> wrote:

> 23/07/2021 17:45, Stephen Hemminger:
> > The first argument to rte_bsf32_safe was incorrectly declared as
> > a 64 bit value. The code only works on 32 bit values and the underlying
> > function rte_bsf32 only accepts 32 bit values. This was a mistake
> > introduced when the safe version was added and probably cause
> > by copy/paste from the 64 bit version.
> > 
> > The bug passed silently under the radar until some other code was
> > built with -Wall and -Wextra in C++ and C++ complains about the
> > missing cast.
> > 
> > Yes, this is a API signature change, but the original code was wrong.
> > It is an inline so not an ABI change.
> > 
> > Fixes: 4e261f551986 ("eal: add 64-bit bsf and 32-bit safe bsf functions")
> > Cc: anatoly.burakov@intel.com
> > Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> > Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>  
> 
> +Cc: stable@dpdk.org
> 
> Applied, thanks.
> 
> I think these functions lack a reference to the name Bit Scan Forward.
> 
> 
> 
> 

Tyler wanted to fix a bunch more stuff in these for 21.11 where it will
be a bigger API change.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver
  2021-07-23 12:46  3%   ` Ferruh Yigit
@ 2021-07-26  5:56  0%     ` Xia, Chenbo
  2021-07-27  8:44  0%       ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Xia, Chenbo @ 2021-07-26  5:56 UTC (permalink / raw)
  To: Yigit, Ferruh, dev, thomas; +Cc: mdr, nhorman, david.marchand

Hi, Ferruh

> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Friday, July 23, 2021 8:47 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>; dev@dpdk.org; thomas@monjalon.net
> Cc: mdr@ashroe.eu; nhorman@tuxdriver.com; david.marchand@redhat.com
> Subject: Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus
> driver
> 
> On 7/23/2021 8:39 AM, Xia, Chenbo wrote:
> > Hi,
> >
> > A gentle ping for comments..
> >
> >> -----Original Message-----
> >> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenbo Xia
> >> Sent: Tuesday, June 1, 2021 4:42 PM
> >> To: dev@dpdk.org; thomas@monjalon.net
> >> Cc: mdr@ashroe.eu; nhorman@tuxdriver.com
> >> Subject: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus
> driver
> >>
> >> All ABIs in PCI bus driver, which are defined in rte_buc_pci.h,
> >> will be removed and the header will be made internal.
> >>
> >> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
> >> ---
> >>  doc/guides/rel_notes/deprecation.rst | 5 +++++
> >>  1 file changed, 5 insertions(+)
> >>
> >> diff --git a/doc/guides/rel_notes/deprecation.rst
> >> b/doc/guides/rel_notes/deprecation.rst
> >> index 9584d6bfd7..b01f46c62e 100644
> >> --- a/doc/guides/rel_notes/deprecation.rst
> >> +++ b/doc/guides/rel_notes/deprecation.rst
> >> @@ -147,3 +147,8 @@ Deprecation Notices
> >>  * cmdline: ``cmdline`` structure will be made opaque to hide platform-
> >> specific
> >>    content. On Linux and FreeBSD, supported prior to DPDK 20.11,
> >>    original structure will be kept until DPDK 21.11.
> >> +
> >> +* pci: To reduce unnecessary ABIs exposed by DPDK bus driver,
> "rte_bus_pci.h"
> >> +  will be made internal in 21.11 and macros/data structures/functions
> defined
> >> +  in the header will not be considered as ABI anymore. This change is
> >> inspired
> >> +  by the RFC
> https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
> >
> > I see there's some ABI improvement work on-going and I think it could be
> part of
> > the work. If it makes sense to you, I'd like some ACKs.
> >
> 
> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
> 
> I am for reducing the public ABI as much as possible. How big will the
> change
> be? Is the 'rte_bus_pci.h' used other than './drivers/bus/pci/'?

I don't see a big change here. And I am not sure I understand your second
question. rte_bus_pci.h will still be used by drivers (maybe remove the
rte prefix and change the file name).

Thanks,
Chenbo

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver
  2021-07-26  5:56  0%     ` Xia, Chenbo
@ 2021-07-27  8:44  0%       ` Bruce Richardson
  2021-07-28 15:32  0%         ` Andrew Rybchenko
  2021-07-31 20:44  0%         ` Thomas Monjalon
  0 siblings, 2 replies; 200+ results
From: Bruce Richardson @ 2021-07-27  8:44 UTC (permalink / raw)
  To: Xia, Chenbo; +Cc: Yigit, Ferruh, dev, thomas, mdr, nhorman, david.marchand

On Mon, Jul 26, 2021 at 05:56:17AM +0000, Xia, Chenbo wrote:
> Hi, Ferruh
> 
> > -----Original Message-----
> > From: Yigit, Ferruh <ferruh.yigit@intel.com>
> > Sent: Friday, July 23, 2021 8:47 PM
> > To: Xia, Chenbo <chenbo.xia@intel.com>; dev@dpdk.org; thomas@monjalon.net
> > Cc: mdr@ashroe.eu; nhorman@tuxdriver.com; david.marchand@redhat.com
> > Subject: Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus
> > driver
> > 
> > On 7/23/2021 8:39 AM, Xia, Chenbo wrote:
> > > Hi,
> > >
> > > A gentle ping for comments..
> > >
> > >> -----Original Message-----
> > >> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenbo Xia
> > >> Sent: Tuesday, June 1, 2021 4:42 PM
> > >> To: dev@dpdk.org; thomas@monjalon.net
> > >> Cc: mdr@ashroe.eu; nhorman@tuxdriver.com
> > >> Subject: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus
> > driver
> > >>
> > >> All ABIs in PCI bus driver, which are defined in rte_buc_pci.h,
> > >> will be removed and the header will be made internal.
> > >>
> > >> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
> > >> ---
> > >>  doc/guides/rel_notes/deprecation.rst | 5 +++++
> > >>  1 file changed, 5 insertions(+)
> > >>
> > >> diff --git a/doc/guides/rel_notes/deprecation.rst
> > >> b/doc/guides/rel_notes/deprecation.rst
> > >> index 9584d6bfd7..b01f46c62e 100644
> > >> --- a/doc/guides/rel_notes/deprecation.rst
> > >> +++ b/doc/guides/rel_notes/deprecation.rst
> > >> @@ -147,3 +147,8 @@ Deprecation Notices
> > >>  * cmdline: ``cmdline`` structure will be made opaque to hide platform-
> > >> specific
> > >>    content. On Linux and FreeBSD, supported prior to DPDK 20.11,
> > >>    original structure will be kept until DPDK 21.11.
> > >> +
> > >> +* pci: To reduce unnecessary ABIs exposed by DPDK bus driver,
> > "rte_bus_pci.h"
> > >> +  will be made internal in 21.11 and macros/data structures/functions
> > defined
> > >> +  in the header will not be considered as ABI anymore. This change is
> > >> inspired
> > >> +  by the RFC
> > https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
> > >
> > > I see there's some ABI improvement work on-going and I think it could be
> > part of
> > > the work. If it makes sense to you, I'd like some ACKs.
> > >
> > 
> > Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > 
> > I am for reducing the public ABI as much as possible. How big will the
> > change
> > be? Is the 'rte_bus_pci.h' used other than './drivers/bus/pci/'?
> 
> I don't see big change here. And I am not sure if I understand your second
> question. The rte_bus_pci.h will still be used by drivers (maybe remove the
> rte prefix and change the file name).
> 
The file itself will still be exported in some cases, where the end-user
has their own drivers which need to be compiled, so I'd recommend keeping
the rte_ prefix. However, I think making all bus APIs internal-only to DPDK
is a good idea.

Acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver
    2021-07-23  7:39  3% ` Xia, Chenbo
@ 2021-07-27 10:58  0% ` Ananyev, Konstantin
  1 sibling, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2021-07-27 10:58 UTC (permalink / raw)
  To: Xia, Chenbo, dev, thomas; +Cc: mdr, nhorman

> 
> All ABIs in PCI bus driver, which are defined in rte_buc_pci.h,
> will be removed and the header will be made internal.
> 
> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
> ---
>  doc/guides/rel_notes/deprecation.rst | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 9584d6bfd7..b01f46c62e 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -147,3 +147,8 @@ Deprecation Notices
>  * cmdline: ``cmdline`` structure will be made opaque to hide platform-specific
>    content. On Linux and FreeBSD, supported prior to DPDK 20.11,
>    original structure will be kept until DPDK 21.11.
> +
> +* pci: To reduce unnecessary ABIs exposed by DPDK bus driver, "rte_bus_pci.h"
> +  will be made internal in 21.11 and macros/data structures/functions defined
> +  in the header will not be considered as ABI anymore. This change is inspired
> +  by the RFC https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
> --

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.17.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [EXT] [PATCH 01/10] security: add support for TSO on IPsec session
  @ 2021-07-27 18:34  3%   ` Akhil Goyal
  2021-07-29  8:37  0%     ` Nicolau, Radu
  2021-07-31 17:50  0%     ` Akhil Goyal
  0 siblings, 2 replies; 200+ results
From: Akhil Goyal @ 2021-07-27 18:34 UTC (permalink / raw)
  To: Radu Nicolau, Tejasree Kondoj, Declan Doherty
  Cc: Anoob Joseph, dev, Abhijit Sinha, Daniel Martin Buckley, Ankur Dwivedi

> Allow user to provision a per security session maximum segment size
> (MSS) for use when Transmit Segmentation Offload (TSO) is supported.
> The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
> ol_flags are specified in mbuf.
> 
> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> ---
Can we have a deprecation notice for the changes introduced in this series?

Also, there are 2 other features which modify the same struct. Can we have a
single deprecation notice for all the changes in rte_security_ipsec_sa_options?
The notice can be something like:
+* security: The IPsec SA config options structure ``struct rte_security_ipsec_sa_options``
+  will be updated to support more features.
And we may have reserved bit fields for the rest of the vacant bits so that the ABI is not broken
when a new bit field is added.

http://patches.dpdk.org/project/dpdk/patch/20210630112049.3747-1-marchana@marvell.com/
http://patches.dpdk.org/project/dpdk/patch/20210705131335.21070-1-ktejasree@marvell.com/

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] RFC: Enahancements to Rx adapter for DPDK 21.11
  @ 2021-07-28  6:08  4%   ` Jerin Jacob
  2021-07-28  6:23  4%     ` Kundapura, Ganapati
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-07-28  6:08 UTC (permalink / raw)
  To: Kundapura, Ganapati; +Cc: dpdk-dev, Jayatheerthan, Jay

On Mon, Jul 26, 2021 at 6:37 PM Kundapura, Ganapati
<ganapati.kundapura@intel.com> wrote:
>
> A gentle ping for comments.
>
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Kundapura, Ganapati
> Sent: 23 July 2021 12:33
> To: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinjacobk@gmail.com>; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
> Subject: [dpdk-dev] RFC: Enahancements to Rx adapter for DPDK 21.11
>
> Hi dpdk-dev,
>
> We would like to submit series of patches to Rx adapters that will enhance the configuration and performance.
> Please find the details below.
>
> (1) Configure Rx event buffer at run time
>     Add new api to configure the size of the Rx event buffer at run time.
>     This api allows setting the size of the event buffer at adapter level.

Since we can change the ABI for 21.11, I would prefer not to add a new API;
instead, add a param to the config structure.
Please send a deprecation notice for the ABI change.

>
> (2) Change packet enqueue buffer in Rx adapter to circular buffer
>     Rx adapter uses memmove() to move unprocessed events to the begining
>     of packet enqueue buffer which consumes good amount of CPU cycles.

Looks good.


>
> (3) Add API to retrieve the Rx queue info
>     Rx queue info containing flags for handling received packets,
>     event queue identifier, scheduler type, event priority,
>     polling frequency of the receive queue and flow identifier

Looks good. Please implement it as adapter ops so that it can be
adapter specific to support HW implementations.



>
> (4) Add adapter_stats CLI to retrieve Rx/Tx adapter stats and rxq info
>     This CLI displays Rx and Tx adapter stats containing received packet count,
>     eventdev enqueue count, enqueue retry count, event buffer size, queue poll count,
>     transmitted packet count, packet dropped count, transmit fail count etc. and rx queue info.

Generally, we don't entertain CLI in the library. You can add
command-line arguments to app/test-eventdev
to test this.

>
> (5) Update Rx timestamp in mbuf using mbuf dynamic field
>     Add support to register timestamp dynamic field in mbuf
>     Update the timestamp in mbuf for each packet before eventdev enqueue

Cool.
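
For context, a minimal sketch of registering and stamping such a dynamic
field (the field name and the stamping point are assumptions, not the
proposed patch):

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>
#include <rte_cycles.h>
#include <rte_errno.h>

static int rxa_ts_offset = -1;

static int
rxa_timestamp_field_register(void)
{
	static const struct rte_mbuf_dynfield desc = {
		.name = "rxa_example_timestamp", /* example name */
		.size = sizeof(uint64_t),
		.align = __alignof__(uint64_t),
	};

	rxa_ts_offset = rte_mbuf_dynfield_register(&desc);
	return rxa_ts_offset < 0 ? -rte_errno : 0;
}

/* called for each mbuf just before it is enqueued to the event device */
static inline void
rxa_timestamp_set(struct rte_mbuf *m)
{
	*RTE_MBUF_DYNFIELD(m, rxa_ts_offset, uint64_t *) = rte_get_tsc_cycles();
}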

>
> We look forward to feedback on this proposal. Once we have initial feedback, patches will be submitted for review.
>
> Thanks,
> Ganapati

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] RFC: Enahancements to Rx adapter for DPDK 21.11
  2021-07-28  6:08  4%   ` Jerin Jacob
@ 2021-07-28  6:23  4%     ` Kundapura, Ganapati
  2021-07-30 11:17  0%       ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Kundapura, Ganapati @ 2021-07-28  6:23 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: dpdk-dev, Jayatheerthan, Jay

Comments inlined

-----Original Message-----
From: Jerin Jacob <jerinjacobk@gmail.com> 
Sent: 28 July 2021 11:38
To: Kundapura, Ganapati <ganapati.kundapura@intel.com>
Cc: dpdk-dev <dev@dpdk.org>; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
Subject: Re: RFC: Enahancements to Rx adapter for DPDK 21.11

On Mon, Jul 26, 2021 at 6:37 PM Kundapura, Ganapati <ganapati.kundapura@intel.com> wrote:
>
> A gentle ping for comments.
>
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Kundapura, Ganapati
> Sent: 23 July 2021 12:33
> To: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinjacobk@gmail.com>; 
> Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
> Subject: [dpdk-dev] RFC: Enahancements to Rx adapter for DPDK 21.11
>
> Hi dpdk-dev,
>
> We would like to submit series of patches to Rx adapters that will enhance the configuration and performance.
> Please find the details below.
>
> (1) Configure Rx event buffer at run time
>     Add a new API to configure the size of the Rx event buffer at run time.
>     This API allows setting the size of the event buffer at the adapter level.

Since we can change the ABI for 21.11, I'd prefer not to add a new API; instead, add a param to the config structure.
Please send the deprecation notice for the ABI change.

The config structure passed to rte_event_eth_rx_adapter_create() is of type rte_event_port_conf, which
comes from the event framework (rte_eventdev.h).
Does it make sense to pass the adapter event buffer size in the rte_event_port_conf structure?

>
> (2) Change packet enqueue buffer in Rx adapter to circular buffer
>     Rx adapter uses memmove() to move unprocessed events to the beginning
>     of the packet enqueue buffer, which consumes a good amount of CPU cycles.

Looks good.


>
> (3) Add API to retrieve the Rx queue info
>     Rx queue info containing flags for handling received packets,
>     event queue identifier, scheduler type, event priority,
>     polling frequency of the receive queue and flow identifier

Looks good. Please implement it as adapter ops so that it can be adapter specific to support HW implementations.



>
> (4) Add adapter_stats CLI to retrieve Rx/Tx adapter stats and rxq info
>     This CLI displays Rx and Tx adapter stats containing received packet count,
>     eventdev enqueue count, enqueue retry count, event buffer size, queue poll count,
>     transmitted packet count, packet dropped count, transmit fail count etc. and rx queue info.

Generally, we don't entertain CLI in the library. You can add command-line arguments to app/test-eventdev to test this.

Adapter_stats is a standalone application, not part of the library, and it'll be in app/adapter_stats.
>
> (5) Update Rx timestamp in mbuf using mbuf dynamic field
>     Add support to register timestamp dynamic field in mbuf
>     Update the timestamp in mbuf for each packet before eventdev 
> enqueue

Cool.

>
> We look forward to feedback on this proposal. Once we have initial feedback, patches will be submitted for review.
>
> Thanks,
> Ganapati

^ permalink raw reply	[relevance 4%]

* [dpdk-dev] Re: [PATCH v1 2/2] devtools: use absolute path for the build directory
  @ 2021-07-28  7:20  0%   ` Feifei Wang
  0 siblings, 0 replies; 200+ results
From: Feifei Wang @ 2021-07-28  7:20 UTC (permalink / raw)
  To: Bruce Richardson; +Cc: dev, nd, Juraj Linkeš, Ruifeng Wang, nd

Hi, Bruce

Sorry to disturb you again. Would you please help review the second patch
of this series? Thanks very much.

Best Regards
Feifei

> -----Original Message-----
> From: Feifei Wang <feifei.wang2@arm.com>
> Sent: Tuesday, June 1, 2021 9:57 AM
> To: Bruce Richardson <bruce.richardson@intel.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Phil Yang <Phil.Yang@arm.com>;
> Juraj Linkeš <juraj.linkes@pantheon.tech>; Feifei Wang
> <Feifei.Wang2@arm.com>; Ruifeng Wang <Ruifeng.Wang@arm.com>
> Subject: [PATCH v1 2/2] devtools: use absolute path for the build directory
> 
> From: Phil Yang <phil.yang@arm.com>
> 
> To make the code easier to maintain, use the absolute path for the default
> build_dir to avoid repeated calls to readlink.
> 
> Suggested-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> Signed-off-by: Phil Yang <phil.yang@arm.com>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Reviewed-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
>  devtools/test-meson-builds.sh | 14 +++++++-------
>  1 file changed, 7 insertions(+), 7 deletions(-)
> 
> diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
> index 43b906598d..d6b0e7e059 100755
> --- a/devtools/test-meson-builds.sh
> +++ b/devtools/test-meson-builds.sh
> @@ -16,7 +16,7 @@ srcdir=$(dirname $(readlink -f $0))/..
> 
>  MESON=${MESON:-meson}
>  use_shared="--default-library=shared"
> -builds_dir=${DPDK_BUILD_TEST_DIR:-.}
> +builds_dir=$(readlink -f ${DPDK_BUILD_TEST_DIR:-.})
> 
>  if command -v gmake >/dev/null 2>&1 ; then
>  	MAKE=gmake
> @@ -193,16 +193,16 @@ build () # <directory> <target cc | cross file> <ABI
> check> [meson options]
>  		fi
> 
>  		install_target $builds_dir/$targetdir \
> -			$(readlink -f $builds_dir/$targetdir/install)
> +			$builds_dir/$targetdir/install
>  		echo "Checking ABI compatibility of $targetdir" >&$verbose
>  		echo $srcdir/devtools/gen-abi.sh \
> -			$(readlink -f
> $builds_dir/$targetdir/install) >&$veryverbose
> +			$builds_dir/$targetdir/install >&$veryverbose
>  		$srcdir/devtools/gen-abi.sh \
> -			$(readlink -f
> $builds_dir/$targetdir/install) >&$veryverbose
> +			$builds_dir/$targetdir/install >&$veryverbose
>  		echo $srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
> -			$(readlink -f
> $builds_dir/$targetdir/install) >&$veryverbose
> +			$builds_dir/$targetdir/install >&$veryverbose
>  		$srcdir/devtools/check-abi.sh $abirefdir/$targetdir \
> -			$(readlink -f
> $builds_dir/$targetdir/install) >&$verbose
> +			$builds_dir/$targetdir/install >&$verbose
>  	fi
>  }
> 
> @@ -275,7 +275,7 @@ done
>  # Test installation of the x86-generic target, to be used for checking  # the
> sample apps build using the pkg-config file for cflags and libs  load_env cc -
> build_path=$(readlink -f $builds_dir/build-x86-generic)
> +build_path=$builds_dir/build-x86-generic
>  export DESTDIR=$build_path/install
>  install_target $build_path $DESTDIR
>  pc_file=$(find $DESTDIR -name libdpdk.pc)
> --
> 2.25.1


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver
  2021-07-27  8:44  0%       ` Bruce Richardson
@ 2021-07-28 15:32  0%         ` Andrew Rybchenko
  2021-07-31 20:44  0%         ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-07-28 15:32 UTC (permalink / raw)
  To: Bruce Richardson, Xia, Chenbo
  Cc: Yigit, Ferruh, dev, thomas, mdr, nhorman, david.marchand

On 7/27/21 11:44 AM, Bruce Richardson wrote:
> On Mon, Jul 26, 2021 at 05:56:17AM +0000, Xia, Chenbo wrote:
>> Hi, Ferruh
>>
>>> -----Original Message-----
>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>> Sent: Friday, July 23, 2021 8:47 PM
>>> To: Xia, Chenbo <chenbo.xia@intel.com>; dev@dpdk.org; thomas@monjalon.net
>>> Cc: mdr@ashroe.eu; nhorman@tuxdriver.com; david.marchand@redhat.com
>>> Subject: Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus
>>> driver
>>>
>>> On 7/23/2021 8:39 AM, Xia, Chenbo wrote:
>>>> Hi,
>>>>
>>>> A gentle ping for comments..
>>>>
>>>>> -----Original Message-----
>>>>> From: dev <dev-bounces@dpdk.org> On Behalf Of Chenbo Xia
>>>>> Sent: Tuesday, June 1, 2021 4:42 PM
>>>>> To: dev@dpdk.org; thomas@monjalon.net
>>>>> Cc: mdr@ashroe.eu; nhorman@tuxdriver.com
>>>>> Subject: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus
>>> driver
>>>>>
>>>>> All ABIs in PCI bus driver, which are defined in rte_buc_pci.h,
>>>>> will be removed and the header will be made internal.
>>>>>
>>>>> Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
>>>>> ---
>>>>>   doc/guides/rel_notes/deprecation.rst | 5 +++++
>>>>>   1 file changed, 5 insertions(+)
>>>>>
>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>> index 9584d6bfd7..b01f46c62e 100644
>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>> @@ -147,3 +147,8 @@ Deprecation Notices
>>>>>   * cmdline: ``cmdline`` structure will be made opaque to hide platform-
>>>>> specific
>>>>>     content. On Linux and FreeBSD, supported prior to DPDK 20.11,
>>>>>     original structure will be kept until DPDK 21.11.
>>>>> +
>>>>> +* pci: To reduce unnecessary ABIs exposed by DPDK bus driver,
>>> "rte_bus_pci.h"
>>>>> +  will be made internal in 21.11 and macros/data structures/functions
>>> defined
>>>>> +  in the header will not be considered as ABI anymore. This change is
>>>>> inspired
>>>>> +  by the RFC
>>> https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
>>>>
>>>> I see there's some ABI improvement work on-going and I think it could be
>>> part of
>>>> the work. If it makes sense to you, I'd like some ACKs.
>>>>
>>>
>>> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>>
>>> I am for reducing the public ABI as much as possible. How big will the
>>> change
>>> be? Is the 'rte_bus_pci.h' used other than './drivers/bus/pci/'?
>>
>> I don't see a big change here. And I am not sure if I understand your second
>> question. The rte_bus_pci.h will still be used by drivers (maybe remove the
>> rte prefix and change the file name).
>>
> The file itself will still be exported in some cases, where the end-user
> has their own drivers which need to be compiled, so I'd recommend keeping
> the rte_ prefix. However, I think making all bus APIs internal-only to DPDK
> is a good idea.
> 
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> 

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
  2021-07-20  8:59  0%           ` Andrew Rybchenko
@ 2021-07-29  4:13  0%             ` Xueming(Steven) Li
  2021-08-01  8:40  0%               ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Xueming(Steven) Li @ 2021-07-29  4:13 UTC (permalink / raw)
  To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable



> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Tuesday, July 20, 2021 5:00 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang
> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
> 
> On 7/19/21 3:50 PM, Xueming(Steven) Li wrote:
> >
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Sent: Monday, July 19, 2021 8:36 PM
> >> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde
> >> <ajit.khaparde@broadcom.com>; Somnath Kotur
> >> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong
> >> Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>;
> >> Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>;
> >> Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>;
> >> Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> >> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
> >> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
> >> Cc: dev@dpdk.org; Viacheslav Galaktionov
> >> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> >> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
> >>
> >> On 7/19/21 2:54 PM, Xueming(Steven) Li wrote:
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>> Sent: Monday, July 19, 2021 4:46 PM
> >>>> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde
> >>>> <ajit.khaparde@broadcom.com>; Somnath Kotur
> >>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>;
> >>>> Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
> >>>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi
> >>>> Zhang <qi.z.zhang@intel.com>; Haiyue Wang <haiyue.wang@intel.com>;
> >>>> Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>;
> >>>> Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas
> >>>> Monjalon <thomas@monjalon.net>; Ferruh Yigit
> >>>> <ferruh.yigit@intel.com>
> >>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
> >>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> >>>> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
> >>>>
> >>>> On 7/19/21 9:58 AM, Xueming(Steven) Li wrote:
> >>>>>
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>>>> Sent: Tuesday, July 13, 2021 12:18 AM
> >>>>>> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
> >>>>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>;
> >>>>>> Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
> >>>>>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi
> >>>>>> Zhang <qi.z.zhang@intel.com>; Haiyue Wang
> >>>>>> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf
> >>>>>> Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> >>>>>> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
> >>>>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>;
> >>>>>> Xueming(Steven) Li <xuemingl@nvidia.com>
> >>>>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
> >>>>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> >>>>>> Subject: [PATCH] ethdev: fix representor port ID search by name
> >>>>>>
> >>>>>> From: Viacheslav Galaktionov
> >>>>>> <viacheslav.galaktionov@oktetlabs.ru>
> >>>>>>
> >>>>>> Fix representor port ID search by name if the representor itself
> >>>>>> does not provide representors info. Getting a list of
> >>>>>> representors from a representor does not make sense. Instead, a
> >>>>>> parent device
> >>>> should be used.
> >>>>>>
> >>>>>> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
> >>>>>>
> >>>>>> Fixes: df7547a6a2cc ("ethdev: add helper function to get
> >>>>>> representor
> >>>>>> ID")
> >>>>>> Cc: stable@dpdk.org
> >>>>>>
> >>>>>> Signed-off-by: Viacheslav Galaktionov
> >>>>>> <viacheslav.galaktionov@oktetlabs.ru>
> >>>>>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>>>> ---
> >>>>>> The new field is added into the hole in rte_eth_dev_data structure.
> >>>>>> The patch does not change ABI, but extra care is required since
> >>>>>> ABI check is disabled for the structure because of the libabigail
> >>>>>> bug
> >>>> [1].
> >>>>>>
> >>>>>> Potentially it is bad for out-of-tree drivers which implement
> >>>>>> representors but do not fill in a new parent_port_id field in rte_eth_dev_data structure. Do we care?
> >>>>>>
> >>>>>> Maybe the patch should add lines to release notes, but I'd like to get initial feedback first.
> >>>>>>
> >>>>>> mlx5 changes should be reviewed by maintainers very carefully, since we are not sure if we patch it correctly.
> >>>>>>
> >>>>>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> >>>>
> >>>> [snip]
> >>>>
> >>>>>> --- a/lib/ethdev/ethdev_driver.h
> >>>>>> +++ b/lib/ethdev/ethdev_driver.h
> >>>>>> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
> >>>>>>      * For backward compatibility, if no representor info, direct
> >>>>>>      * map legacy VF (no controller and pf).
> >>>>>>      *
> >>>>>> - * @param ethdev
> >>>>>> - *  Handle of ethdev port.
> >>>>>> + * @param parent_port_id
> >>>>>> + *  Port ID of the backing device.
> >>>>>>      * @param type
> >>>>>>      *  Representor type.
> >>>>>>      * @param controller
> >>>>>> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
> >>>>>>      */
> >>>>>>     __rte_internal
> >>>>>>     int
> >>>>>> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> >>>>>> +rte_eth_representor_id_get(uint16_t parent_port_id,
> >>>>>
> >>>>> It makes more sense to get representor info from the parent port.
> >>>>> A representor is a member of a switch domain; the PMD owns the information
> >>>>> of the representor owner port and the info of representors. This
> >>>>> change looks better, but not sure whether it is valuable to introduce
> >>>>> a new
> >>>> member to the EAL data structure.
> >>>>
> >>>> IMHO, it is simply incorrect to return representors info on a
> >>>> representor itself. Representor info is information about which representors may be populated using the device.
> >>>>
> >>>> If above statement is correct, we need a way to get parent device
> >>>> by representor to do name to representor ID mapping. I see two options to do it:
> >>>>     A. Dedicated field in rte_eth_dev_data as the patch does.
> >>>>     B. Dedicated ethdev op (since representor knows parent port ID anyway).
> >>>> We have chosen (A) because of simplicity.
> >>>
> >>> Just recalled that a representor port could be probed w/o the owner PF - is a parent port still required then?
> >>
>> I thought that it is impossible and a parent port is absolutely
>> required for a representor. Could you provide an example and explain how it will work?
> >
> > In case of bonding, PF0 and PF1 become one PF port `bond0`, PCI address is PF0.
> > 	-a <PF0>,representor=pf[0-1]vf[0-99] // this is the syntax we proposed.
> 
> Is it net/bonding or vendor-specific bonding in HW?
> If I remember correctly in the case of net/bonding we have ethdev ports for bonded devices.

Not the net/bonding PMD; it's Linux bonding, supported by the HW driver.

> 
> >
> > To be backward compatible, also support the following 2 devargs:
> > 	-a <pf0>,representor=[0-99] // probe bond0 and representor on pf0
> > 	-a <pf1>,representor=[0-99] // probe representors on pf1.
> > If the devargs start with PF1, no owner PF1 is created as it is disabled
> > in bonding. bond0 (PF0) can't be created automatically here as the device is located by the PCI address (PF1) from the devargs.
> 
> So, I guess the problem is vendor-specific bonding in HW. Anyway legacy backward compatible representor spec should not require
> representors info since it worked before without it. So, it does not sound like a reason to have representors info on a representor
> itself.

The legacy backward-compatible logic could be something like this: if the PF owner port is found, use it; otherwise fall back to the current representor.
This won't break anything I guess - what do you think?

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
    2021-07-19  6:58  0% ` Xueming(Steven) Li
@ 2021-07-29  4:20  0% ` Xueming(Steven) Li
  2021-08-01  8:50  0%   ` Andrew Rybchenko
  2021-08-18 14:00  3% ` [dpdk-dev] [PATCH v2] " Andrew Rybchenko
  2021-08-20 12:18  3% ` [dpdk-dev] [PATCH v3] " Andrew Rybchenko
  3 siblings, 1 reply; 200+ results
From: Xueming(Steven) Li @ 2021-07-29  4:20 UTC (permalink / raw)
  To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable



> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Tuesday, July 13, 2021 12:18 AM
> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur <somnath.kotur@broadcom.com>; John Daley
> <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>; Qiming Yang
> <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad
> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas
> Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Xueming(Steven) Li <xuemingl@nvidia.com>
> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> Subject: [PATCH] ethdev: fix representor port ID search by name
> 
> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> 
> Fix representor port ID search by name if the representor itself does not provide representors info. Getting a list of representors from
> a representor does not make sense. Instead, a parent device should be used.
> 
> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
> 
> Fixes: df7547a6a2cc ("ethdev: add helper function to get representor ID")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> ---
> The new field is added into the hole in rte_eth_dev_data structure.
> The patch does not change ABI, but extra care is required since ABI check is disabled for the structure because of the libabigail bug [1].
> 
> Potentially it is bad for out-of-tree drivers which implement representors but do not fill in a new parent_port_id field in
> rte_eth_dev_data structure. Do we care?
> 
> Maybe the patch should add lines to release notes, but I'd like to get initial feedback first.
> 
> mlx5 changes should be reviewed by maintainers very carefully, since we are not sure if we patch it correctly.
> 
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> 
>  drivers/net/bnxt/bnxt_reps.c             |  1 +
>  drivers/net/enic/enic_vf_representor.c   |  1 +
>  drivers/net/i40e/i40e_vf_representor.c   |  1 +
>  drivers/net/ice/ice_dcf_vf_representor.c |  1 +  drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
>  drivers/net/mlx5/linux/mlx5_os.c         | 11 +++++++++++
>  drivers/net/mlx5/windows/mlx5_os.c       | 11 +++++++++++
>  lib/ethdev/ethdev_driver.h               |  6 +++---
>  lib/ethdev/rte_class_eth.c               |  2 +-
>  lib/ethdev/rte_ethdev.c                  |  8 ++++----
>  lib/ethdev/rte_ethdev_core.h             |  4 ++++
>  11 files changed, 39 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c index bdbad53b7d..902591cd39 100644
> --- a/drivers/net/bnxt/bnxt_reps.c
> +++ b/drivers/net/bnxt/bnxt_reps.c
> @@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
>  	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	eth_dev->data->representor_id = rep_params->vf_id;
> +	eth_dev->data->parent_port_id = rep_params->parent_dev->data->port_id;
> 
>  	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
>  	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr, diff --git a/drivers/net/enic/enic_vf_representor.c
> b/drivers/net/enic/enic_vf_representor.c
> index 79dd6e5640..6ee7967ce9 100644
> --- a/drivers/net/enic/enic_vf_representor.c
> +++ b/drivers/net/enic/enic_vf_representor.c
> @@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
>  	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	eth_dev->data->representor_id = vf->vf_id;
> +	eth_dev->data->parent_port_id = pf->port_id;
>  	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
>  		sizeof(struct rte_ether_addr) *
>  		ENIC_UNICAST_PERFECT_FILTERS, 0);
> diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
> index 0481b55381..865b637585 100644
> --- a/drivers/net/i40e/i40e_vf_representor.c
> +++ b/drivers/net/i40e/i40e_vf_representor.c
> @@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
>  	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	ethdev->data->representor_id = representor->vf_id;
> +	ethdev->data->parent_port_id = pf->dev_data->parent_port_id;
> 
>  	/* Setting the number queues allocated to the VF */
>  	ethdev->data->nb_rx_queues = vf->vsi->nb_qps; diff --git a/drivers/net/ice/ice_dcf_vf_representor.c
> b/drivers/net/ice/ice_dcf_vf_representor.c
> index 970461f3e9..c7cd3fd290 100644
> --- a/drivers/net/ice/ice_dcf_vf_representor.c
> +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> @@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
> 
>  	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  	vf_rep_eth_dev->data->representor_id = repr->vf_id;
> +	vf_rep_eth_dev->data->parent_port_id =
> +repr->dcf_eth_dev->data->port_id;
> 
>  	vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
> 
> diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
> index d5b636a194..7a2063849e 100644
> --- a/drivers/net/ixgbe/ixgbe_vf_representor.c
> +++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
> @@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
> 
>  	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  	ethdev->data->representor_id = representor->vf_id;
> +	ethdev->data->parent_port_id = representor->pf_ethdev->data->port_id;
> 
>  	/* Set representor device ops */
>  	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops; diff --git a/drivers/net/mlx5/linux/mlx5_os.c
> b/drivers/net/mlx5/linux/mlx5_os.c
> index be22d9cbd2..5550d30628 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -1511,6 +1511,17 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>  	if (priv->representor) {
>  		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  		eth_dev->data->representor_id = priv->representor_id;
> +		MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
> +			const struct mlx5_priv *opriv =
> +				rte_eth_devices[port_id].data->dev_private;
> +
> +			if (!opriv ||
> +			    opriv->sh != priv->sh ||
> +			    opriv->representor)
> +				continue;
> +			eth_dev->data->parent_port_id = port_id;
> +			break;
> +		}

At line 126, there is logic that locates priv->domain_id; the parent port_id could be found there.

>  	}
>  	priv->mp_id.port_id = eth_dev->data->port_id;
>  	strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN); diff --git a/drivers/net/mlx5/windows/mlx5_os.c
> b/drivers/net/mlx5/windows/mlx5_os.c
> index e30b682822..037c928dc1 100644
> --- a/drivers/net/mlx5/windows/mlx5_os.c
> +++ b/drivers/net/mlx5/windows/mlx5_os.c
> @@ -506,6 +506,17 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>  	if (priv->representor) {
>  		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  		eth_dev->data->representor_id = priv->representor_id;
> +		MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
> +			const struct mlx5_priv *opriv =
> +				rte_eth_devices[port_id].data->dev_private;
> +
> +			if (!opriv ||
> +			    opriv->sh != priv->sh ||
> +			    opriv->representor)
> +				continue;
> +			eth_dev->data->parent_port_id = port_id;
> +			break;
> +		}
>  	}
>  	/*
>  	 * Store associated network device interface index. This index diff --git a/lib/ethdev/ethdev_driver.h
> b/lib/ethdev/ethdev_driver.h index 40e474aa7e..07f6d1f9a4 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
>   * For backward compatibility, if no representor info, direct
>   * map legacy VF (no controller and pf).
>   *
> - * @param ethdev
> - *  Handle of ethdev port.
> + * @param parent_port_id
> + *  Port ID of the backing device.
>   * @param type
>   *  Representor type.
>   * @param controller
> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
>   */
>  __rte_internal
>  int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t parent_port_id,
>  			   enum rte_eth_representor_type type,
>  			   int controller, int pf, int representor_port,
>  			   uint16_t *repr_id);
> diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c index 1fe5fa1f36..e3b7ab9728 100644
> --- a/lib/ethdev/rte_class_eth.c
> +++ b/lib/ethdev/rte_class_eth.c
> @@ -95,7 +95,7 @@ eth_representor_cmp(const char *key __rte_unused,
>  		c = i / (np * nf);
>  		p = (i / nf) % np;
>  		f = i % nf;
> -		if (rte_eth_representor_id_get(edev,
> +		if (rte_eth_representor_id_get(edev->data->parent_port_id,
>  			eth_da.type,
>  			eth_da.nb_mh_controllers == 0 ? -1 :
>  					eth_da.mh_controllers[c],
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 6ebf52b641..acda1d43fb 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -5997,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)  }
> 
>  int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t parent_port_id,
>  			   enum rte_eth_representor_type type,
>  			   int controller, int pf, int representor_port,
>  			   uint16_t *repr_id)
> @@ -6012,7 +6012,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>  		return -EINVAL;
> 
>  	/* Get PMD representor range info. */
> -	ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
> +	ret = rte_eth_representor_info_get(parent_port_id, NULL);
>  	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
>  	    controller == -1 && pf == -1) {
>  		/* Direct mapping for legacy VF representor. */ @@ -6026,7 +6026,7 @@ rte_eth_representor_id_get(const struct
> rte_eth_dev *ethdev,
>  	info = calloc(1, size);
>  	if (info == NULL)
>  		return -ENOMEM;
> -	ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
> +	ret = rte_eth_representor_info_get(parent_port_id, info);
>  	if (ret < 0)
>  		goto out;
> 
> @@ -6045,7 +6045,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>  			continue;
>  		if (info->ranges[i].id_end < info->ranges[i].id_base) {
>  			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
> -				ethdev->data->port_id, info->ranges[i].id_base,
> +				parent_port_id, info->ranges[i].id_base,
>  				info->ranges[i].id_end, i);
>  			continue;
> 
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h index edf96de2dc..13cb84b52f 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -185,6 +185,10 @@ struct rte_eth_dev_data {
>  			/**< Switch-specific identifier.
>  			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
>  			 */
> +	uint16_t parent_port_id;
> +			/**< Port ID of the backing device.
> +			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> +			 */
> 
>  	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
>  	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> --
> 2.30.2


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [EXT] [PATCH 01/10] security: add support for TSO on IPsec session
  2021-07-27 18:34  3%   ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-07-29  8:37  0%     ` Nicolau, Radu
  2021-07-31 17:50  0%     ` Akhil Goyal
  1 sibling, 0 replies; 200+ results
From: Nicolau, Radu @ 2021-07-29  8:37 UTC (permalink / raw)
  To: Akhil Goyal, Tejasree Kondoj, Declan Doherty
  Cc: Anoob Joseph, dev, Abhijit Sinha, Daniel Martin Buckley, Ankur Dwivedi

Hi, thanks for reviewing. I'm OOO at the moment, I will send an updated 
patchset next week.

On 7/27/2021 9:34 PM, Akhil Goyal wrote:
>> Allow user to provision a per security session maximum segment size
>> (MSS) for use when Transmit Segmentation Offload (TSO) is supported.
>> The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
>> ol_flags are specified in mbuf.
>>
>> Signed-off-by: Declan Doherty <declan.doherty@intel.com>
>> Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
>> Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
>> Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
>> ---
> Can we have deprecation notice for the changes introduced in this series.
>
> Also there are 2 other features which modify same struct. Can we have a
> Single deprecation notice for all the changes in the rte_security_ipsec_sa_options?
> The notice can be something like:
> +* security: The IPsec SA config options structure ``struct rte_security_ipsec_sa_options``
> +  will be updated to support more features.
> And we may have a reserved bit fields for rest of the vacant bits so that ABI is not broken
> When a new bit field is added.
>
> http://patches.dpdk.org/project/dpdk/patch/20210630112049.3747-1-marchana@marvell.com/
> http://patches.dpdk.org/project/dpdk/patch/20210705131335.21070-1-ktejasree@marvell.com/

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] RFC: Enahancements to Rx adapter for DPDK 21.11
  2021-07-28  6:23  4%     ` Kundapura, Ganapati
@ 2021-07-30 11:17  0%       ` Jerin Jacob
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2021-07-30 11:17 UTC (permalink / raw)
  To: Kundapura, Ganapati; +Cc: dpdk-dev, Jayatheerthan, Jay

On Wed, Jul 28, 2021 at 11:53 AM Kundapura, Ganapati
<ganapati.kundapura@intel.com> wrote:
>
> Comments inlined

Please fix your email client for adding proper >

>
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: 28 July 2021 11:38
> To: Kundapura, Ganapati <ganapati.kundapura@intel.com>
> Cc: dpdk-dev <dev@dpdk.org>; Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
> Subject: Re: RFC: Enahancements to Rx adapter for DPDK 21.11
>
> On Mon, Jul 26, 2021 at 6:37 PM Kundapura, Ganapati <ganapati.kundapura@intel.com> wrote:
> >
> > A gentle ping for comments.
> >
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Kundapura, Ganapati
> > Sent: 23 July 2021 12:33
> > To: dpdk-dev <dev@dpdk.org>; Jerin Jacob <jerinjacobk@gmail.com>;
> > Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
> > Subject: [dpdk-dev] RFC: Enahancements to Rx adapter for DPDK 21.11
> >
> > Hi dpdk-dev,
> >
> > We would like to submit series of patches to Rx adapters that will enhance the configuration and performance.
> > Please find the details below.
> >
> > (1) Configure Rx event buffer at run time
> >     Add a new API to configure the size of the Rx event buffer at run time.
> >     This API allows setting the size of the event buffer at the adapter level.
>
> Since we can change the ABI for 21.11, I'd prefer not to add a new API; instead, add a param to the config structure.
> Please send the deprecation notice for the ABI change.
>
> The config structure passed to rte_event_eth_rx_adapter_create() is of type rte_event_port_conf, which
> comes from the event framework (rte_eventdev.h).
> Does it make sense to pass the adapter event buffer size in the rte_event_port_conf structure?

I see. Then a new API to set the buffer size is OK.
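
One possible shape for such an API (purely illustrative; the name, structure
and signature are assumptions, not an agreed proposal):

struct rte_event_eth_rx_adapter_params {
	uint16_t event_buf_size; /* number of events the adapter buffers */
};

int
rte_event_eth_rx_adapter_create_with_params(uint8_t id, uint8_t dev_id,
		struct rte_event_port_conf *port_config,
		struct rte_event_eth_rx_adapter_params *rxa_params);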


>
> >
> > (2) Change packet enqueue buffer in Rx adapter to circular buffer
> >     Rx adapter uses memmove() to move unprocessed events to the beginning
> >     of the packet enqueue buffer, which consumes a good amount of CPU cycles.
>
> Looks good.
>
>
> >
> > (3) Add API to retrieve the Rx queue info
> >     Rx queue info containing flags for handling received packets,
> >     event queue identifier, scheduler type, event priority,
> >     polling frequency of the receive queue and flow identifier
>
> Looks good. Please implement it as adapter ops so that it can be adapter specific to support HW implementations.
>
>
>
> >
> > (4) Add adapter_stats CLI to retrieve Rx/Tx adapter stats and rxq info
> >     This CLI displays Rx and Tx adapter stats containing received packet count,
> >     eventdev enqueue count, enqueue retry count, event buffer size, queue poll count,
> >     transmitted packet count, packet dropped count, transmit fail count etc. and rx queue info.
>
> Generally, we don't entertain CLI in the library. You can add command-line arguments to app/test-eventdev to test this.
>
> Adapter_stats is a standalone application, not part of the library, and it'll be in app/adapter_stats.

No need for a new app. Please add stats as telemetry; then they can be
pulled through
usertools/dpdk-telemetry.py
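
A minimal sketch of exposing the adapter stats over telemetry (the command
name, the adapter id and the subset of fields are illustrative only):

#include <rte_common.h>
#include <rte_telemetry.h>
#include <rte_event_eth_rx_adapter.h>

static int
handle_rxa_stats(const char *cmd __rte_unused, const char *params __rte_unused,
		struct rte_tel_data *d)
{
	struct rte_event_eth_rx_adapter_stats stats;

	if (rte_event_eth_rx_adapter_stats_get(0, &stats) != 0)
		return -1;

	rte_tel_data_start_dict(d);
	rte_tel_data_add_dict_u64(d, "rx_packets", stats.rx_packets);
	rte_tel_data_add_dict_u64(d, "rx_enq_count", stats.rx_enq_count);
	return 0;
}

/* at init time */
rte_telemetry_register_cmd("/eventdev/rxa_stats", handle_rxa_stats,
		"Returns Rx adapter stats. No parameters.");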



> >
> > (5) Update Rx timestamp in mbuf using mbuf dynamic field
> >     Add support to register timestamp dynamic field in mbuf
> >     Update the timestamp in mbuf for each packet before eventdev
> > enqueue
>
> Cool.
>
> >
> > We look forward to feedback on this proposal. Once we have initial feedback, patches will be submitted for review.
> >
> > Thanks,
> > Ganapati

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce security API changes for Inline IPsec
  @ 2021-07-30 22:16  3% ` Thomas Monjalon
  2021-08-03  2:11  3%   ` Nithin Dabilpuram
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-07-30 22:16 UTC (permalink / raw)
  To: konstantin.ananyev, jerinj, gakhil, roy.fan.zhang,
	hemant.agrawal, Nithin Dabilpuram
  Cc: matan, dev, ferruh.yigit, bruce.richardson, mdr, david.marchand

27/07/2021 19:36, Nithin Dabilpuram:
> Announce changes to make rte_security_set_pkt_metadata() and
> rte_security_get_userdata() inline instead of C functions and
> also addition of another field in structure rte_security_ctx for
> holding flags.

I guess there is a performance reason but the motivation
is not explained. Also it is going in the opposite direction
of what is discussed in the Technical Board meetings:
we should avoid and reduce the number of inline functions
to reduce the ABI surface.



^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v11 00/10] eal: Add EAL API for threading
  @ 2021-07-30 22:31  3% ` Narcisa Ana Maria Vasile
  2021-08-02 17:32  3%   ` [dpdk-dev] [PATCH v12 " Narcisa Ana Maria Vasile
  0 siblings, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-07-30 22:31 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

EAL thread API

**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model,
so it currently
relies on a header file that hides the Windows calls under
pthread matched interfaces. Given that EAL should isolate the environment
specifics from the applications and libraries and mediate
all the communication with the operating systems, a new EAL interface
is needed for thread management.

**Goals**
* Introduce a generic EAL API for threading support that will remove
  the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
  RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
  3rd party thread library through a configuration option.

**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)

**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();

lib/librte_eal/common/rte_thread.c
int rte_thread_create() 
{
	return pthread_create();
}

lib/librte_eal/windows/rte_thread.c
int rte_thread_create() 
{
	return CreateThread();
}
-----------------------------------------------------

**Thread attributes**

When or after a thread is created, specific characteristics of the thread
can be adjusted. Given that the thread characteristics that are of interest
for DPDK applications are affinity and priority, the following structure
that represents thread attributes has been defined:

typedef struct
{
	enum rte_thread_priority priority;
	rte_cpuset_t cpuset;
} rte_thread_attr_t;

The *rte_thread_create()* function can optionally receive
an rte_thread_attr_t
object that will cause the thread to be created with the
affinity and priority
described by the attributes object. If no rte_thread_attr_t is passed
(parameter is NULL), the default affinity and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.

*Priority* is represented through an enum that currently advertises
two values for priority:
	- RTE_THREAD_PRIORITY_NORMAL
	- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority      - sets the priority of a thread
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
                               with a new value for priority

The user can choose the thread priority through an EAL parameter
when starting an application. If the EAL parameter is not used,
the per-platform default value for thread priority is used.
Otherwise, the administrator has an option to set one of the available options:
 --thread-prio normal
 --thread-prio realtime

Example:
./dpdk-l2fwd -l 0-3 -n 4 --thread-prio normal -- -q 8 -p ffff

*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
                                   rte_thread_attr_t object
rte_thread_set/get_affinity      - sets/gets the affinity of a thread
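
A minimal usage sketch built from the names above (the thread id type and
the exact signatures are assumptions until the series is merged;
worker_main/worker_arg are placeholders):

rte_thread_attr_t attr;
rte_thread_t worker_id;      /* thread id type name is an assumption */
rte_cpuset_t cpuset;

rte_thread_attr_init(&attr);
rte_thread_attr_set_priority(&attr, RTE_THREAD_PRIORITY_NORMAL);

CPU_ZERO(&cpuset);
CPU_SET(2, &cpuset);         /* pin the worker to CPU 2 */
rte_thread_attr_set_affinity(&attr, &cpuset);

rte_thread_create(&worker_id, &attr, worker_main, worker_arg);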

**Errors**
A translation function that maps Windows error codes to errno-style
error codes is provided. 

**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Add support for pthread_mutex_trylock
* Additional functionality offered by pthread_*
  (such as pthread_setname_np, etc.)

v11:
 - Add unit tests for thread API
 - Rebase

v10:
 - Remove patch no. 10. It will be broken down in subpatches 
   and sent as a different patchset that depends on this one.
   This is done due to the ABI breaks that would be caused by patch 10.
 - Replace unix/rte_thread.c with common/rte_thread.c
 - Remove initializations that may prevent compiler from issuing useful
   warnings.
 - Remove rte_thread_types.h and rte_windows_thread_types.h
 - Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
 - Remove functions that retrieves thread handle from process handle
 - Remove rte_thread_cancel() until same behavior is obtained on
   all platforms.
 - Fix rte_thread_detach() function description,
   return value and remove empty line.
 - Reimplement mutex functions. Add compatible representation for mutex
   identifier. Add macro to replace static mutex initialization instances.
 - Fix commit messages (lines too long, remove unicode symbols)

v9:
- Sign patches

v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value

v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.

v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()

v5:
- update cover letter with more details on the priority argument

v4:
- fix function description
- rebase

v3:
- rebase

v2:
- revert changes that break ABI 
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c


Narcisa Vasile (10):
  eal: add basic threading functions
  eal: add thread attributes
  eal/windows: translate Windows errors to errno-style errors
  eal: implement functions for thread affinity management
  eal: implement thread priority management functions
  eal: add thread lifetime management
  eal: implement functions for mutex management
  eal: implement functions for thread barrier management
  eal: add EAL argument for setting thread priority
  Add unit tests for thread API

 app/test/meson.build                |   2 +
 app/test/test_threads.c             | 419 ++++++++++++++++++++
 lib/eal/common/eal_common_options.c |  28 +-
 lib/eal/common/eal_internal_cfg.h   |   2 +
 lib/eal/common/eal_options.h        |   2 +
 lib/eal/common/meson.build          |   1 +
 lib/eal/common/rte_thread.c         | 445 +++++++++++++++++++++
 lib/eal/include/rte_thread.h        | 406 ++++++++++++++++++-
 lib/eal/unix/meson.build            |   1 -
 lib/eal/unix/rte_thread.c           |  92 -----
 lib/eal/version.map                 |  20 +
 lib/eal/windows/eal_lcore.c         | 176 ++++++---
 lib/eal/windows/eal_windows.h       |  10 +
 lib/eal/windows/include/sched.h     |   2 +-
 lib/eal/windows/rte_thread.c        | 588 ++++++++++++++++++++++++++--
 15 files changed, 2020 insertions(+), 174 deletions(-)
 create mode 100644 app/test/test_threads.c
 create mode 100644 lib/eal/common/rte_thread.c
 delete mode 100644 lib/eal/unix/rte_thread.c

-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [EXT] [PATCH 01/10] security: add support for TSO on IPsec session
  2021-07-27 18:34  3%   ` [dpdk-dev] [EXT] " Akhil Goyal
  2021-07-29  8:37  0%     ` Nicolau, Radu
@ 2021-07-31 17:50  0%     ` Akhil Goyal
  1 sibling, 0 replies; 200+ results
From: Akhil Goyal @ 2021-07-31 17:50 UTC (permalink / raw)
  To: Radu Nicolau, Declan Doherty, Abhijit Sinha, Daniel Martin Buckley
  Cc: Anoob Joseph, dev, Ankur Dwivedi, Tejasree Kondoj

> > Allow user to provision a per security session maximum segment size
> > (MSS) for use when Transmit Segmentation Offload (TSO) is supported.
> > The MSS value will be used when PKT_TX_TCP_SEG or PKT_TX_UDP_SEG
> > ol_flags are specified in mbuf.
> >
> > Signed-off-by: Declan Doherty <declan.doherty@intel.com>
> > Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
> > Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
> > Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
> > ---
> Can we have deprecation notice for the changes introduced in this series.
> 
> Also there are 2 other features which modify same struct. Can we have a
> Single deprecation notice for all the changes in the
> rte_security_ipsec_sa_options?
> The notice can be something like:
> +* security: The IPsec SA config options structure ``struct
> rte_security_ipsec_sa_options``
> +  will be updated to support more features.
> And we may have a reserved bit fields for rest of the vacant bits so that ABI is
> not broken
> When a new bit field is added.
> 
> http://patches.dpdk.org/project/dpdk/patch/20210630112049.3747-1-
> marchana@marvell.com/
> http://patches.dpdk.org/project/dpdk/patch/20210705131335.21070-1-
> ktejasree@marvell.com/

I have sent the consolidated deprecation notice for all three features.
Can you guys Ack it?
https://mails.dpdk.org/archives/dev/2021-July/215906.html

Also, please send deprecation notice for changes in ipsec xform as well.

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH 0/4] cryptodev and security ABI improvements
@ 2021-07-31 18:13  8% Akhil Goyal
  2021-07-31 18:13  3% ` [dpdk-dev] [PATCH 1/4] cryptodev: remove LIST_END enumerators Akhil Goyal
                   ` (2 more replies)
  0 siblings, 3 replies; 200+ results
From: Akhil Goyal @ 2021-07-31 18:13 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	Akhil Goyal

This is a first series planned for ABI improvements
in the cryptodev and security libraries.

Other planned improvements under development.
- cryptodev: export driver interface as internal
- cryptodev: split and hide struct rte_cryptodev, struct
rte_cryptodev_data
- cryptodev: hide struct rte_cryptodev_sym_session,
rte_cryptodev_asym_session
- security: hide struct rte_security_session

Requesting everyone to review and contribute to the missing
pieces to improve ABI stability. 

Akhil Goyal (4):
  cryptodev: remove LIST_END enumerators
  cryptodev: promote asym APIs to stable
  security: hide internal API
  security: add reserved bitfields

 app/test/test_cryptodev_asym.c     |  4 ++--
 devtools/libabigail.abignore       |  4 ++++
 drivers/crypto/qat/qat_asym.c      |  2 +-
 lib/cryptodev/rte_crypto_asym.h    |  4 ----
 lib/cryptodev/rte_cryptodev.h      | 10 ----------
 lib/cryptodev/version.map          | 24 +++++++++++++-----------
 lib/security/rte_security.h        |  6 ++++++
 lib/security/rte_security_driver.h |  2 +-
 lib/security/version.map           |  7 ++++++-
 9 files changed, 33 insertions(+), 30 deletions(-)

-- 
2.25.1


^ permalink raw reply	[relevance 8%]

* [dpdk-dev] [PATCH 1/4] cryptodev: remove LIST_END enumerators
  2021-07-31 18:13  8% [dpdk-dev] [PATCH 0/4] cryptodev and security ABI improvements Akhil Goyal
@ 2021-07-31 18:13  3% ` Akhil Goyal
  2021-07-31 18:13  3% ` [dpdk-dev] [PATCH 4/4] security: add reserved bitfields Akhil Goyal
  2021-07-31 18:17  4% ` [dpdk-dev] [PATCH 0/4] cryptodev and security ABI improvements Akhil Goyal
  2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-07-31 18:13 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	Akhil Goyal

Remove *_LIST_END enumerators from asymmetric crypto
lib to avoid ABI breakage for every new addition in
enums.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 app/test/test_cryptodev_asym.c  | 4 ++--
 drivers/crypto/qat/qat_asym.c   | 2 +-
 lib/cryptodev/rte_crypto_asym.h | 4 ----
 3 files changed, 3 insertions(+), 7 deletions(-)

diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 847b074a4f..afa0e91a45 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -542,7 +542,7 @@ test_one_case(const void *test_case, int sessionless)
 		printf("  %u) TestCase %s %s\n", test_index++,
 			tc.modex.description, test_msg);
 	} else {
-		for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
+		for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
 			if (tc.modex.xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
 				if (tc.rsa_data.op_type_flags & (1 << i)) {
 					if (tc.rsa_data.key_exp) {
@@ -1028,7 +1028,7 @@ static inline void print_asym_capa(
 			rte_crypto_asym_xform_strings[capa->xform_type]);
 	printf("operation supported -");
 
-	for (i = 0; i < RTE_CRYPTO_ASYM_OP_LIST_END; i++) {
+	for (i = 0; i <= RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE; i++) {
 		/* check supported operations */
 		if (rte_cryptodev_asym_xform_capability_check_optype(capa, i))
 			printf(" %s",
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 85973812a8..026625a4d2 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -742,7 +742,7 @@ qat_asym_session_configure(struct rte_cryptodev *dev,
 			err = -EINVAL;
 			goto error;
 		}
-	} else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
+	} else if (xform->xform_type > RTE_CRYPTO_ASYM_XFORM_ECPM
 			|| xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) {
 		QAT_LOG(ERR, "Invalid asymmetric crypto xform");
 		err = -EINVAL;
diff --git a/lib/cryptodev/rte_crypto_asym.h b/lib/cryptodev/rte_crypto_asym.h
index 9c866f553f..5edf658572 100644
--- a/lib/cryptodev/rte_crypto_asym.h
+++ b/lib/cryptodev/rte_crypto_asym.h
@@ -94,8 +94,6 @@ enum rte_crypto_asym_xform_type {
 	 */
 	RTE_CRYPTO_ASYM_XFORM_ECPM,
 	/**< Elliptic Curve Point Multiplication */
-	RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
-	/**< End of list */
 };
 
 /**
@@ -116,7 +114,6 @@ enum rte_crypto_asym_op_type {
 	/**< DH Public Key generation operation */
 	RTE_CRYPTO_ASYM_OP_SHARED_SECRET_COMPUTE,
 	/**< DH Shared Secret compute operation */
-	RTE_CRYPTO_ASYM_OP_LIST_END
 };
 
 /**
@@ -133,7 +130,6 @@ enum rte_crypto_rsa_padding_type {
 	/**< RSA PKCS#1 OAEP padding scheme */
 	RTE_CRYPTO_RSA_PADDING_PSS,
 	/**< RSA PKCS#1 PSS padding scheme */
-	RTE_CRYPTO_RSA_PADDING_TYPE_LIST_END
 };
 
 /**
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH 4/4] security: add reserved bitfields
  2021-07-31 18:13  8% [dpdk-dev] [PATCH 0/4] cryptodev and security ABI improvements Akhil Goyal
  2021-07-31 18:13  3% ` [dpdk-dev] [PATCH 1/4] cryptodev: remove LIST_END enumerators Akhil Goyal
@ 2021-07-31 18:13  3% ` Akhil Goyal
  2021-07-31 18:17  4% ` [dpdk-dev] [PATCH 0/4] cryptodev and security ABI improvements Akhil Goyal
  2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-07-31 18:13 UTC (permalink / raw)
  To: dev
  Cc: thomas, david.marchand, hemant.agrawal, anoobj,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang,
	Akhil Goyal

In struct rte_security_ipsec_sa_options, every new option added
causes an ABI breakage. To avoid this, a reserved_opts bitfield
is added for the remaining bits available in the structure.
Now, for every new SA option, reserved_opts can be reduced and
the new option bit can be added. A corresponding exception is
also added in devtools/libabigail.abignore.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 devtools/libabigail.abignore | 4 ++++
 lib/security/rte_security.h  | 6 ++++++
 2 files changed, 10 insertions(+)

diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 93158405e0..5d8da28e55 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -52,3 +52,7 @@
 ; https://sourceware.org/bugzilla/show_bug.cgi?id=28060
 [suppress_type]
 	name = rte_eth_dev_data
+
+; Ignore changes in reserved_opts bitfield of rte_security_ipsec_sa_options
+[suppress_variable]
+	name = reserved_opts
diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
index 88d31de0a6..4606425e8d 100644
--- a/lib/security/rte_security.h
+++ b/lib/security/rte_security.h
@@ -181,6 +181,12 @@ struct rte_security_ipsec_sa_options {
 	 * * 0: Disable per session security statistics collection for this SA.
 	 */
 	uint32_t stats : 1;
+
+	/** Reserved bit fields for future extension
+	 *
+	 * Note: reduce number of bits in reserved_opts for every new option
+	 */
+	uint32_t reserved_opts : 24;
 };
 
 /** IPSec security association direction */
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH 0/4] cryptodev and security ABI improvements
  2021-07-31 18:13  8% [dpdk-dev] [PATCH 0/4] cryptodev and security ABI improvements Akhil Goyal
  2021-07-31 18:13  3% ` [dpdk-dev] [PATCH 1/4] cryptodev: remove LIST_END enumerators Akhil Goyal
  2021-07-31 18:13  3% ` [dpdk-dev] [PATCH 4/4] security: add reserved bitfields Akhil Goyal
@ 2021-07-31 18:17  4% ` Akhil Goyal
  2 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-07-31 18:17 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: thomas, david.marchand, hemant.agrawal, Anoob Joseph,
	pablo.de.lara.guarch, fiona.trahe, declan.doherty, matan,
	g.singh, roy.fan.zhang, jianjay.zhou, asomalap, ruifeng.wang

> Subject: [PATCH 0/4] cryptodev and security ABI improvements
> 
> This is a first series planned for ABI improvements
> in cryptodev and security library.
> 
> Other planned improvements under development.
> - cryptodev: export driver interface as internal
> - cryptodev: split and hide struct rte_cryptodev, struct
> rte_cryptodev_data
> - cryptodev: hide struct rte_cryptodev_sym_session,
> rte_cryptodev_asym_session
> - security: hide struct rte_security_session
> 
> Request everyone to review and contribute for the missing
> pieces to improve ABI stability.
> 
Forgot to mention, this is an RFC series for DPDK 21.11

> Akhil Goyal (4):
>   cryptodev: remove LIST_END enumerators
>   cryptodev: promote asym APIs to stable
>   security: hide internal API
>   security: add reserved bitfields
> 
>  app/test/test_cryptodev_asym.c     |  4 ++--
>  devtools/libabigail.abignore       |  4 ++++
>  drivers/crypto/qat/qat_asym.c      |  2 +-
>  lib/cryptodev/rte_crypto_asym.h    |  4 ----
>  lib/cryptodev/rte_cryptodev.h      | 10 ----------
>  lib/cryptodev/version.map          | 24 +++++++++++++-----------
>  lib/security/rte_security.h        |  6 ++++++
>  lib/security/rte_security_driver.h |  2 +-
>  lib/security/version.map           |  7 ++++++-
>  9 files changed, 33 insertions(+), 30 deletions(-)
> 
> --
> 2.25.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver
  2021-07-27  8:44  0%       ` Bruce Richardson
  2021-07-28 15:32  0%         ` Andrew Rybchenko
@ 2021-07-31 20:44  0%         ` Thomas Monjalon
  2021-08-03  1:52  0%           ` Xia, Chenbo
  1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-07-31 20:44 UTC (permalink / raw)
  To: Xia, Chenbo
  Cc: dev, Yigit, Ferruh, dev, mdr, david.marchand, Bruce Richardson,
	andrew.rybchenko, konstantin.ananyev

27/07/2021 10:44, Bruce Richardson:
> On Mon, Jul 26, 2021 at 05:56:17AM +0000, Xia, Chenbo wrote:
> > From: Yigit, Ferruh <ferruh.yigit@intel.com>
> > > On 7/23/2021 8:39 AM, Xia, Chenbo wrote:
> > > > From: dev <dev-bounces@dpdk.org> On Behalf Of Chenbo Xia
> > > >> +* pci: To reduce unnecessary ABIs exposed by DPDK bus driver,
> > > "rte_bus_pci.h"
> > > >> +  will be made internal in 21.11 and macros/data structures/functions
> > > defined
> > > >> +  in the header will not be considered as ABI anymore. This change is
> > > >> inspired
> > > >> +  by the RFC
> > > https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
> > > >
> > > > I see there's some ABI improvement work on-going and I think it could be
> > > part of
> > > > the work. If it makes sense to you, I'd like some ACKs.
> > > >
> > > 
> > > Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > > 
> > > I am for reducing the public ABI as much as possible. How big will the
> > > change
> > > be? Is the 'rte_bus_pci.h' used other than './drivers/bus/pci/'?
> > 
> > I don't see big change here. And I am not sure if I understand your second
> > question. The rte_bus_pci.h will still be used by drivers (maybe remove the
> > rte prefix and change the file name).
> > 
> The file itself will still be exported in some cases, where the end-user
> has their own drivers which need to be compiled, so I'd recommend keeping
> the rte_ prefix. However, I think making all bus APIs internal-only to DPDK
> is a good idea.

I don't understand how it can be exported _and_ internal.
And about the rte_ prefix, it should be kept even if it is used only
in internal drivers because it prevents namespace clashes with other
libraries included by the drivers.
As a rule we should always have the rte_ prefix for each symbol used
outside of its own library.

That said I am OK with the direction of hiding PCI bus API.

Applied, thanks.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
  2021-07-29  4:13  0%             ` Xueming(Steven) Li
@ 2021-08-01  8:40  0%               ` Andrew Rybchenko
  2021-08-01 14:25  0%                 ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-08-01  8:40 UTC (permalink / raw)
  To: Xueming(Steven) Li, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable

On 7/29/21 7:13 AM, Xueming(Steven) Li wrote:
> 
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Tuesday, July 20, 2021 5:00 PM
>> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang
>> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
>> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
>> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
>> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
>>
>> On 7/19/21 3:50 PM, Xueming(Steven) Li wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>> Sent: Monday, July 19, 2021 8:36 PM
>>>> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde
>>>> <ajit.khaparde@broadcom.com>; Somnath Kotur
>>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong
>>>> Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>;
>>>> Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>;
>>>> Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>;
>>>> Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
>>>> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
>>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
>>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
>>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
>>>> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
>>>>
>>>> On 7/19/21 2:54 PM, Xueming(Steven) Li wrote:
>>>>>
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>> Sent: Monday, July 19, 2021 4:46 PM
>>>>>> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde
>>>>>> <ajit.khaparde@broadcom.com>; Somnath Kotur
>>>>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>;
>>>>>> Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
>>>>>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi
>>>>>> Zhang <qi.z.zhang@intel.com>; Haiyue Wang <haiyue.wang@intel.com>;
>>>>>> Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>;
>>>>>> Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas
>>>>>> Monjalon <thomas@monjalon.net>; Ferruh Yigit
>>>>>> <ferruh.yigit@intel.com>
>>>>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
>>>>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
>>>>>> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
>>>>>>
>>>>>> On 7/19/21 9:58 AM, Xueming(Steven) Li wrote:
>>>>>>>
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>>>> Sent: Tuesday, July 13, 2021 12:18 AM
>>>>>>>> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
>>>>>>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>;
>>>>>>>> Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
>>>>>>>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi
>>>>>>>> Zhang <qi.z.zhang@intel.com>; Haiyue Wang
>>>>>>>> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf
>>>>>>>> Shuler <shahafs@nvidia.com>; Slava Ovsiienko
>>>>>>>> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
>>>>>>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>;
>>>>>>>> Xueming(Steven) Li <xuemingl@nvidia.com>
>>>>>>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
>>>>>>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
>>>>>>>> Subject: [PATCH] ethdev: fix representor port ID search by name
>>>>>>>>
>>>>>>>> From: Viacheslav Galaktionov
>>>>>>>> <viacheslav.galaktionov@oktetlabs.ru>
>>>>>>>>
>>>>>>>> Fix representor port ID search by name if the representor itself
>>>>>>>> does not provide representors info. Getting a list of
>>>>>>>> representors from a representor does not make sense. Instead, a
>>>>>>>> parent device
>>>>>> should be used.
>>>>>>>>
>>>>>>>> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
>>>>>>>>
>>>>>>>> Fixes: df7547a6a2cc ("ethdev: add helper function to get
>>>>>>>> representor
>>>>>>>> ID")
>>>>>>>> Cc: stable@dpdk.org
>>>>>>>>
>>>>>>>> Signed-off-by: Viacheslav Galaktionov
>>>>>>>> <viacheslav.galaktionov@oktetlabs.ru>
>>>>>>>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>>>> ---
>>>>>>>> The new field is added into the hole in rte_eth_dev_data structure.
>>>>>>>> The patch does not change ABI, but extra care is required since
>>>>>>>> ABI check is disabled for the structure because of the libabigail
>>>>>>>> bug
>>>>>> [1].
>>>>>>>>
>>>>>>>> Potentially it is bad for out-of-tree drivers which implement
>>>>>>>> representors but do not fill in a new parent_port_id field in rte_eth_dev_data structure. Do we care?
>>>>>>>>
>>>>>>>> May be the patch should add lines to release notes, but I'd like to get initial feedback first.
>>>>>>>>
>>>>>>>> mlx5 changes should be reviewed by maintainers very carefully, since we are not sure if we patch it correctly.
>>>>>>>>
>>>>>>>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
>>>>>>
>>>>>> [snip]
>>>>>>
>>>>>>>> --- a/lib/ethdev/ethdev_driver.h
>>>>>>>> +++ b/lib/ethdev/ethdev_driver.h
>>>>>>>> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
>>>>>>>>       * For backward compatibility, if no representor info, direct
>>>>>>>>       * map legacy VF (no controller and pf).
>>>>>>>>       *
>>>>>>>> - * @param ethdev
>>>>>>>> - *  Handle of ethdev port.
>>>>>>>> + * @param parent_port_id
>>>>>>>> + *  Port ID of the backing device.
>>>>>>>>       * @param type
>>>>>>>>       *  Representor type.
>>>>>>>>       * @param controller
>>>>>>>> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
>>>>>>>>       */
>>>>>>>>      __rte_internal
>>>>>>>>      int
>>>>>>>> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>>>>>>>> +rte_eth_representor_id_get(uint16_t parent_port_id,
>>>>>>>
>>>>>>> It makes more sense to get representor info from the parent port.
>>>>>>> A representor is a member of a switch domain; the PMD owns the
>>>>>>> information of the representor owner port and the info of its
>>>>>>> representors. This change looks better, but I am not sure whether
>>>>>>> it is valuable to introduce a new member to the EAL data structure.
>>>>>>
>>>>>> IMHO, it is simply incorrect to return representors info on a
>>>>>> representor itself. Representor info is information about which
>>>>>> representors may be populated using the device.
>>>>>>
>>>>>> If the above statement is correct, we need a way to get the parent
>>>>>> device from a representor to do name to representor ID mapping. I see two options to do it:
>>>>>>      A. Dedicated field in rte_eth_dev_data as the patch does.
>>>>>>      B. Dedicated ethdev op (since representor knows parent port ID anyway).
>>>>>> We have chosen (A) because of simplicity.
>>>>>
>>>>> Just recalled that a representor port could be probed w/o its owner PF, is a parent port mandatory then?
>>>>
>>>> I thought that it is impossible and a parent port is absolutely
>>>> required for a representor. Could you provide an example and explain how it will work?
>>>
>>> In case of bonding, PF0 and PF1 become one PF port `bond0`, PCI address is PF0.
>>> 	-a <PF0>,representor=pf[0-1]vf[0-99] // this is the syntax we proposed.
>>
>> Is it net/bonding or vendor-specific bonding in HW?
>> If I remember correctly in the case of net/bonding we have ethdev ports for bonded devices.
> 
> Not net/bonding pmd, it's Linux bonding, supported by hw driver.

Got it.

>>
>>>
>>> To be backward compatible, also support the following 2 devargs:
>>> 	-a <pf0>,representor=[0-99] // probe bond0 and representor on pf0
>>> 	-a <pf1>,representor=[0-99] // probe representors on pf1.
>>> If the devargs start with PF1, no owner PF1 is created as it is disabled
>>> in bonding. Can't create bond0(PF0) automatically here as the device is located by PCI address (PF1) from the devargs.
>>
>> So, I guess the problem is vendor-specific bonding in HW. Anyway legacy backward compatible representor spec should not require
>> representors info since it worked before without it. So, it does not sound like a reason to have representors info on a representor
>> itself.
> 
> Legacy backward-compatible logic could be something like this: if the PF owner port is found, use it, otherwise fall back to the current representor.
> This won't break anything I guess, what do you think?

Logically, even in the legacy backward compatibility case, PF1 VF
representors have a parent port ID - PF0, which is a bond of PF0 & PF1.
So, parent_port_id should be filled in. In this case eth_representor_cmp()
will do rte_eth_representor_id_get(PF0-bond-id, -1, -1, VF, &id), which
will return the PF0 VF representor ID. Most likely it will even match and
everything will work, but it is still incorrect.

In fact, I have another idea. Try to do:
rte_eth_representor_id_get(representor-port-id, ...) first
for the backward compatibility case and, if that is not supported, do
it on the parent port ID.
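
A minimal sketch of that fallback order, assuming the uint16_t-based
prototype proposed in this patch; the helper name and the -ENOTSUP
handling are illustrative only, not a committed implementation:

	/* Try the representor port itself first (legacy behaviour), then
	 * fall back to the parent port recorded in rte_eth_dev_data. */
	static int
	repr_id_lookup(const struct rte_eth_dev *ethdev,
		       enum rte_eth_representor_type type,
		       int controller, int pf, int representor_port,
		       uint16_t *repr_id)
	{
		int ret;

		ret = rte_eth_representor_id_get(ethdev->data->port_id, type,
						 controller, pf,
						 representor_port, repr_id);
		if (ret != -ENOTSUP)
			return ret;

		/* The backing (parent) device provides the representors info. */
		return rte_eth_representor_id_get(ethdev->data->parent_port_id,
						  type, controller, pf,
						  representor_port, repr_id);
	}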

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
  2021-07-29  4:20  0% ` Xueming(Steven) Li
@ 2021-08-01  8:50  0%   ` Andrew Rybchenko
  2021-08-01 14:15  0%     ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-08-01  8:50 UTC (permalink / raw)
  To: Xueming(Steven) Li, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable

On 7/29/21 7:20 AM, Xueming(Steven) Li wrote:
> 
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Tuesday, July 13, 2021 12:18 AM
>> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur <somnath.kotur@broadcom.com>; John Daley
>> <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>; Qiming Yang
>> <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad
>> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas
>> Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Xueming(Steven) Li <xuemingl@nvidia.com>
>> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
>> Subject: [PATCH] ethdev: fix representor port ID search by name
>>
>> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>>
>> Fix representor port ID search by name if the representor itself does not provide representors info. Getting a list of representors from
>> a representor does not make sense. Instead, a parent device should be used.
>>
>> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
>>
>> Fixes: df7547a6a2cc ("ethdev: add helper function to get representor ID")
>> Cc: stable@dpdk.org
>>
>> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> ---
>> The new field is added into the hole in rte_eth_dev_data structure.
>> The patch does not change ABI, but extra care is required since ABI check is disabled for the structure because of the libabigail bug [1].
>>
>> Potentially it is bad for out-of-tree drivers which implement representors but do not fill in a new parent_port_id field in
>> rte_eth_dev_data structure. Do we care?
>>
>> May be the patch should add lines to release notes, but I'd like to get initial feedback first.
>>
>> mlx5 changes should be reviewed by maintainers very carefully, since we are not sure if we patch it correctly.
>>
>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060

[snip]

>> b/drivers/net/mlx5/linux/mlx5_os.c
>> index be22d9cbd2..5550d30628 100644
>> --- a/drivers/net/mlx5/linux/mlx5_os.c
>> +++ b/drivers/net/mlx5/linux/mlx5_os.c
>> @@ -1511,6 +1511,17 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>>   	if (priv->representor) {
>>   		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>>   		eth_dev->data->representor_id = priv->representor_id;
>> +		MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
>> +			const struct mlx5_priv *opriv =
>> +				rte_eth_devices[port_id].data->dev_private;
>> +
>> +			if (!opriv ||
>> +			    opriv->sh != priv->sh ||
>> +			    opriv->representor)
>> +				continue;
>> +			eth_dev->data->parent_port_id = port_id;
>> +			break;
>> +		}
> 
> At line 126, there is a logic that locate priv->domain_id, parent port_id could be found there.

Do you mean line 1260? The comment above says "Look for sibling devices
in order to reuse their switch domain if any, otherwise allocate one.".
So, it is not a parent. Is the comment misleading, and does the parent
match the search criteria as well? But in any case, we should guarantee
that it is a parent port, not a sibling port. So, we need an extra
criterion to match the parent port only.
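
A hedged illustration of such an extra criterion, on top of the loop added
by this patch; it assumes the mlx5 priv->master flag marks the E-Switch
master (parent) port, which the mlx5 maintainers should double-check:

	MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
		const struct mlx5_priv *opriv =
			rte_eth_devices[port_id].data->dev_private;

		if (!opriv ||
		    opriv->sh != priv->sh ||
		    opriv->representor ||
		    !opriv->master) /* extra criterion: parent port only */
			continue;
		eth_dev->data->parent_port_id = port_id;
		break;
	}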

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
  @ 2021-08-01 12:03  3%   ` Andrew Rybchenko
  2021-08-01 12:23  0%     ` Ori Kam
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-08-01 12:03 UTC (permalink / raw)
  To: Eli Britstein, Thomas Monjalon, Ferruh Yigit, Ori Kam
  Cc: dev, Ilya Maximets, Ajit Khaparde, Matan Azrad, Ivan Malov,
	Viacheslav Galaktionov

On 8/1/21 1:57 PM, Eli Britstein wrote:
> 
> On 8/1/2021 1:22 PM, Andrew Rybchenko wrote:
>> External email: Use caution opening links or attachments
>>
>>
>> By its very name, action PORT_ID means that packets hit an ethdev with 
>> the
>> given DPDK port ID. At least the current comments don't state the 
>> opposite.
>> That said, since port representors had been adopted, applications like 
>> OvS
>> have been misusing the action. They misread its purpose as sending 
>> packets
>> to the opposite end of the "wire" plugged to the given ethdev, for 
>> example,
>> redirecting packets to the VF itself rather than to its representor 
>> ethdev.
>> Another example: OvS relies on this action with the admin PF's ethdev 
>> port
>> ID specified in it in order to send offloaded packets to the physical 
>> port.
>>
>> Since there might be applications which use this action in its valid 
>> sense,
>> one can't just change the documentation to greenlight the opposite 
>> meaning.
>>
>> The documentation must be clarified and rte_flow_action_port_id structure
>> should be extended to support both meanings.
> 
> I think the only clarification needed is that PORT_ID acts as if 
> rte_eth_tx_burst is called with the specified port-id.

Sorry, but I still think that it is the opposite meaning to the current
documentation, which says "Directs matching traffic to a given DPDK port
ID." Since it happens at the switching level (transfer rule), "to a given
DPDK port" means that it will be received on a given DPDK port.

Anyway, the goal of the deprecation notice is to highlight that it must
be fixed and to ensure that we can choose the right decision even if it
breaks API/ABI.

> Regarding representors, it's not different. When using TX on a 
> representor port, the packets appear as RX on its represented port.
> 
> Please elaborate if there is a use case for the PORT_ID~ in which the 
> app can get the packets using rte_eth_rx_burst on the specified port-id.

A multi-homed host with a NIC with two physical ports and two PFs used
by a DPDK app at layer 3 (IP addresses). Different cores are used to handle
traffic from different ports, plus routing in the DPDK app. If traffic to
port #0's IP address is received on phys port #1, it is useful to redirect
that traffic to port ID 0 directly, to have these packets on the correct
CPU cores from the very beginning and to avoid SW mechanisms passing them
from port #1 CPU cores to port #0 CPU cores. A rough sketch of such a rule
is below.
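
Roughly, this redirect can be expressed with PORT_ID in its documented
"deliver to the given ethdev" sense; the port numbers, IPv4 address and
attribute usage below are made up for illustration only:

	/* Illustration: at switch level (transfer), steer IPv4 traffic
	 * destined to port #0's address but arriving on port #1 to port 0. */
	struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
	struct rte_flow_item_ipv4 ip_spec = {
		.hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 0, 2, 1)), /* example */
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr.dst_addr = RTE_BE32(UINT32_MAX),
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_port_id dst_port = { .id = 0 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &dst_port },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;
	struct rte_flow *flow = rte_flow_create(1 /* phys port #1 */, &attr,
						pattern, actions, &error);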

>>
>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> ---
>>   doc/guides/rel_notes/deprecation.rst | 5 +++++
>>   1 file changed, 5 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst 
>> b/doc/guides/rel_notes/deprecation.rst
>> index d9c0e65921..6e6413c89f 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -158,3 +158,8 @@ Deprecation Notices
>>   * security: The functions ``rte_security_set_pkt_metadata`` and
>>     ``rte_security_get_userdata`` will be made inline functions and 
>> additional
>>     flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
>> +
>> +* ethdev: Definition of the flow API action PORT_ID is ambiguous and 
>> needs
>> +  clarification. Structure rte_flow_action_port_id will be extended to
>> +  specify traffic direction to represented entity or ethdev port 
>> itself in
>> +  DPDK 21.11.
>> -- 
>> 2.30.2
>>


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
  2021-08-01 12:03  3%   ` Andrew Rybchenko
@ 2021-08-01 12:23  0%     ` Ori Kam
  2021-08-01 12:43  0%       ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Ori Kam @ 2021-08-01 12:23 UTC (permalink / raw)
  To: Andrew Rybchenko, Eli Britstein, NBU-Contact-Thomas Monjalon,
	Ferruh Yigit
  Cc: dev, Ilya Maximets, Ajit Khaparde, Matan Azrad, Ivan Malov,
	Viacheslav Galaktionov

Hi Andrew,

I think before we can change the API we must agree on the meaning of representor.

PSB more comments

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Sunday, August 1, 2021 3:04 PM
> To: Eli Britstein <elibr@nvidia.com>; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Ori Kam
> <orika@nvidia.com>
> Cc: dev@dpdk.org; Ilya Maximets <i.maximets@ovn.org>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>; Matan Azrad <matan@nvidia.com>; Ivan
> Malov <ivan.malov@oktetlabs.ru>; Viacheslav Galaktionov
> <viacheslav.galaktionov@oktetlabs.ru>
> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
> 
> On 8/1/21 1:57 PM, Eli Britstein wrote:
> >
> > On 8/1/2021 1:22 PM, Andrew Rybchenko wrote:
> >> External email: Use caution opening links or attachments
> >>
> >>
> >> By its very name, action PORT_ID means that packets hit an ethdev
> >> with the given DPDK port ID. At least the current comments don't
> >> state the opposite.
> >> That said, since port representors had been adopted, applications
> >> like OvS have been misusing the action. They misread its purpose as
> >> sending packets to the opposite end of the "wire" plugged to the
> >> given ethdev, for example, redirecting packets to the VF itself
> >> rather than to its representor ethdev.
> >> Another example: OvS relies on this action with the admin PF's ethdev
> >> port ID specified in it in order to send offloaded packets to the
> >> physical port.
> >>
> >> Since there might be applications which use this action in its valid
> >> sense, one can't just change the documentation to greenlight the
> >> opposite meaning.
> >>
> >> The documentation must be clarified and rte_flow_action_port_id
> >> structure should be extended to support both meanings.
> >
> > I think the only clarification needed is that PORT_ID acts as if
> > rte_eth_tx_burst is called with the specified port-id.
> 
> Sorry, but I still think that it is opposite meaning to the current
> documentation which says "Directs matching traffic to a given DPDK port ID."
> Since it happens on switching level (transfer rule) "to a given DPDK port"
> means that it will be received on a given DPDK port.
> 
> Anyway, the goal of the deprecation notice is to highlight that it must be
> fixed and ensure that we can choose right decision even if it breaks API/ABI.
> 
Agree, it is good that you created the announcement.
I think we should continue our discussion on what a representor is.
I think for the current implementation the doc should say "directs / matches
traffic to / from the switch port which the selected DPDK representor port
is connected to, or to the DPDK port if this port is not a representor."
If we go this way there is no need to change the API, only the doc.

> > Regarding representors, it's not different. When using TX on a
> > representor port, the packets appear as RX on its represented port.
> >
> > Please elaborate if there is a use case for the PORT_ID~ in which the
> > app can get the packets using rte_eth_rx_burst on the specified port-id.
> 
> Multi-home host with a NIC with two physical ports and two PFs used by
> DPDK app with layer 3 (IP addresses). Different cores used to handle traffic
> from different ports plus routing in DPDK app. If traffic to port #0 IP address
> is received on phys port #1, it is useful to redirect traffic to port ID 0 directly
> to have these packets on correct CPU cores from the very beginning to avoid
> SW mechanisms to pass from port #1 CPU cores to port #0 CPU cores.
> 
To make sure I understand: you are talking about a DPDK application that
is connected to a number of ports and is the E-Switch manager, but it doesn't
use representors, only the actual ports, right?
I think the definition I wrote above also works for this case.


Best,
Ori

> >>
> >> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> ---
> >>   doc/guides/rel_notes/deprecation.rst | 5 +++++
> >>   1 file changed, 5 insertions(+)
> >>
> >> diff --git a/doc/guides/rel_notes/deprecation.rst
> >> b/doc/guides/rel_notes/deprecation.rst
> >> index d9c0e65921..6e6413c89f 100644
> >> --- a/doc/guides/rel_notes/deprecation.rst
> >> +++ b/doc/guides/rel_notes/deprecation.rst
> >> @@ -158,3 +158,8 @@ Deprecation Notices
> >>   * security: The functions ``rte_security_set_pkt_metadata`` and
> >>     ``rte_security_get_userdata`` will be made inline functions and
> >> additional
> >>     flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
> >> +
> >> +* ethdev: Definition of the flow API action PORT_ID is ambiguous and
> >> needs
> >> +  clarification. Structure rte_flow_action_port_id will be extended
> >> +to
> >> +  specify traffic direction to represented entity or ethdev port
> >> itself in
> >> +  DPDK 21.11.
> >> --
> >> 2.30.2
> >>


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
  2021-08-01 12:23  0%     ` Ori Kam
@ 2021-08-01 12:43  0%       ` Andrew Rybchenko
  2021-08-01 12:56  0%         ` Ori Kam
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-08-01 12:43 UTC (permalink / raw)
  To: Ori Kam, Eli Britstein, NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Ilya Maximets, Ajit Khaparde, Matan Azrad, Ivan Malov,
	Viacheslav Galaktionov

Hi Ori,

On 8/1/21 3:23 PM, Ori Kam wrote:
> Hi Andrew,
> 
> I think before we can change the API we must agree on the meaning of representor.

The question is not directly related to a representor definition.
Just indirectly. PORT_ID action makes sense for non-representor
ports as well.

> PSB more comments
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Sunday, August 1, 2021 3:04 PM
>> To: Eli Britstein <elibr@nvidia.com>; NBU-Contact-Thomas Monjalon
>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Ori Kam
>> <orika@nvidia.com>
>> Cc: dev@dpdk.org; Ilya Maximets <i.maximets@ovn.org>; Ajit Khaparde
>> <ajit.khaparde@broadcom.com>; Matan Azrad <matan@nvidia.com>; Ivan
>> Malov <ivan.malov@oktetlabs.ru>; Viacheslav Galaktionov
>> <viacheslav.galaktionov@oktetlabs.ru>
>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
>>
>> On 8/1/21 1:57 PM, Eli Britstein wrote:
>>>
>>> On 8/1/2021 1:22 PM, Andrew Rybchenko wrote:
>>>> External email: Use caution opening links or attachments
>>>>
>>>>
>>>> By its very name, action PORT_ID means that packets hit an ethdev
>>>> with the given DPDK port ID. At least the current comments don't
>>>> state the opposite.
>>>> That said, since port representors had been adopted, applications
>>>> like OvS have been misusing the action. They misread its purpose as
>>>> sending packets to the opposite end of the "wire" plugged to the
>>>> given ethdev, for example, redirecting packets to the VF itself
>>>> rather than to its representor ethdev.
>>>> Another example: OvS relies on this action with the admin PF's ethdev
>>>> port ID specified in it in order to send offloaded packets to the
>>>> physical port.
>>>>
>>>> Since there might be applications which use this action in its valid
>>>> sense, one can't just change the documentation to greenlight the
>>>> opposite meaning.
>>>>
>>>> The documentation must be clarified and rte_flow_action_port_id
>>>> structure should be extended to support both meanings.
>>>
>>> I think the only clarification needed is that PORT_ID acts as if
>>> rte_eth_tx_burst is called with the specified port-id.
>>
>> Sorry, but I still think that it is opposite meaning to the current
>> documentation which says "Directs matching traffic to a given DPDK port ID."
>> Since it happens on switching level (transfer rule) "to a given DPDK port"
>> means that it will be received on a given DPDK port.
>>
>> Anyway, the goal of the deprecation notice is to highlight that it must be
>> fixed and ensure that we can choose right decision even if it breaks API/ABI.
>>
> Agree, it is good that you created the announcement.

Hopefully you agree that the area requires clarification and must
be improved. I think so hot discussions really prove it.

> I think we should continue our discussion on what is a representor.

Yes, but it is a hard topic. I'd like to unbind PORT_ID action from
the discussion, since the action makes sense for non-representors
as well.

> I think for current implementation the doc should say "direct / matches
> traffic to / from the switch port which the selected DPDK representor port
> is connected to or to DPDK port if this port is not a representor."

IMHO it is better to keep the definition of the action simple and
do not have any representor specifics in it. Representor is an ethdev
port. If we direct traffic to an ethdev port, it should be received
on the ethdev port regardless if it is a representor or not.
It is better to avoid exceptions and special cases.

> If we go this way there is no need to change the API only the doc.
> 
>>> Regarding representors, it's not different. When using TX on a
>>> representor port, the packets appear as RX on its represented port.
>>>
>>> Please elaborate if there is a use case for the PORT_ID~ in which the
>>> app can get the packets using rte_eth_rx_burst on the specified port-id.
>>
>> Multi-home host with a NIC with two physical ports and two PFs used by
>> DPDK app with layer 3 (IP addresses). Different cores used to handle traffic
>> from different ports plus routing in DPDK app. If traffic to port #0 IP address
>> is received on phys port #1, it is useful to redirect traffic to port ID 0 directly
>> to have these packets on correct CPU cores from the very beginning to avoid
>> SW mechanisms to pass from port #1 CPU cores to port #0 CPU cores.
>>
> To make sure I understand you are talking about a DPDK application that
> is connected to number of ports and it is Eswitch manager, but it doesn't use
> representors but the actual ports, right?
> I think the definition I wrote above also works for this case.

Another possible request is to direct traffic from phys port #0
to phys port #1 directly and express it in terms of the PORT_ID action.

Thanks,
Andrew.

> Best,
> Ori
> 
>>>>
>>>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>> ---
>>>>    doc/guides/rel_notes/deprecation.rst | 5 +++++
>>>>    1 file changed, 5 insertions(+)
>>>>
>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>> b/doc/guides/rel_notes/deprecation.rst
>>>> index d9c0e65921..6e6413c89f 100644
>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>> @@ -158,3 +158,8 @@ Deprecation Notices
>>>>    * security: The functions ``rte_security_set_pkt_metadata`` and
>>>>      ``rte_security_get_userdata`` will be made inline functions and
>>>> additional
>>>>      flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
>>>> +
>>>> +* ethdev: Definition of the flow API action PORT_ID is ambiguous and
>>>> needs
>>>> +  clarification. Structure rte_flow_action_port_id will be extended
>>>> +to
>>>> +  specify traffic direction to represented entity or ethdev port
>>>> itself in
>>>> +  DPDK 21.11.
>>>> --
>>>> 2.30.2
>>>>
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
  2021-08-01 12:43  0%       ` Andrew Rybchenko
@ 2021-08-01 12:56  0%         ` Ori Kam
  2021-08-01 13:23  0%           ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Ori Kam @ 2021-08-01 12:56 UTC (permalink / raw)
  To: Andrew Rybchenko, Eli Britstein, NBU-Contact-Thomas Monjalon,
	Ferruh Yigit
  Cc: dev, Ilya Maximets, Ajit Khaparde, Matan Azrad, Ivan Malov,
	Viacheslav Galaktionov

Hi Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Sunday, August 1, 2021 3:44 PM
> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
> 
> Hi Ori,
> 
> On 8/1/21 3:23 PM, Ori Kam wrote:
> > Hi Andrew,
> >
> > I think before we can change the API we must agree on the meaning of
> representor.
> 
> The question is not directly related to a representor definition.
> Just indirectly. PORT_ID action makes sense for non-representor ports as
> well.
> 
> > PSB more comments
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Sent: Sunday, August 1, 2021 3:04 PM
> >> To: Eli Britstein <elibr@nvidia.com>; NBU-Contact-Thomas Monjalon
> >> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Ori Kam
> >> <orika@nvidia.com>
> >> Cc: dev@dpdk.org; Ilya Maximets <i.maximets@ovn.org>; Ajit Khaparde
> >> <ajit.khaparde@broadcom.com>; Matan Azrad <matan@nvidia.com>;
> Ivan
> >> Malov <ivan.malov@oktetlabs.ru>; Viacheslav Galaktionov
> >> <viacheslav.galaktionov@oktetlabs.ru>
> >> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
> >> changes
> >>
> >> On 8/1/21 1:57 PM, Eli Britstein wrote:
> >>>
> >>> On 8/1/2021 1:22 PM, Andrew Rybchenko wrote:
> >>>> External email: Use caution opening links or attachments
> >>>>
> >>>>
> >>>> By its very name, action PORT_ID means that packets hit an ethdev
> >>>> with the given DPDK port ID. At least the current comments don't
> >>>> state the opposite.
> >>>> That said, since port representors had been adopted, applications
> >>>> like OvS have been misusing the action. They misread its purpose as
> >>>> sending packets to the opposite end of the "wire" plugged to the
> >>>> given ethdev, for example, redirecting packets to the VF itself
> >>>> rather than to its representor ethdev.
> >>>> Another example: OvS relies on this action with the admin PF's
> >>>> ethdev port ID specified in it in order to send offloaded packets
> >>>> to the physical port.
> >>>>
> >>>> Since there might be applications which use this action in its
> >>>> valid sense, one can't just change the documentation to greenlight
> >>>> the opposite meaning.
> >>>>
> >>>> The documentation must be clarified and rte_flow_action_port_id
> >>>> structure should be extended to support both meanings.
> >>>
> >>> I think the only clarification needed is that PORT_ID acts as if
> >>> rte_eth_tx_burst is called with the specified port-id.
> >>
> >> Sorry, but I still think that it is opposite meaning to the current
> >> documentation which says "Directs matching traffic to a given DPDK port
> ID."
> >> Since it happens on switching level (transfer rule) "to a given DPDK port"
> >> means that it will be received on a given DPDK port.
> >>
> >> Anyway, the goal of the deprecation notice is to highlight that it
> >> must be fixed and ensure that we can choose right decision even if it
> breaks API/ABI.
> >>
> > Agree, it is good that you created the announcement.
> 
> Hopefully you agree that the area requires clarification and must be
> improved. I think so hot discussions really prove it.
> 
+1

> > I think we should continue our discussion on what is a representor.
> 
> Yes, but it is a hard topic. I'd like to unbind PORT_ID action from the
> discussion, since the action makes sense for non-representors as well.
> 
If this can be done, great, I'm for it; I'm not sure it can be, but let's try.

> > I think for current implementation the doc should say "direct /
> > matches traffic to / from the switch port which the selected DPDK
> > representor port is connected to or to DPDK port if this port is not a
> representor."
> 
> IMHO it is better to keep the definition of the action simple and do not have
> any representor specifics in it. Representor is an ethdev port. If we direct
> traffic to an ethdev port, it should be received on the ethdev port regardless
> if it is a representor or not.
> It is better to avoid exceptions and special cases.
> 

Let's see if I understand correctly: you suggest that the port action / item will be
for the DPDK port, unless it is marked with some bit which means that
the traffic should be routed to the switch port which the DPDK port represents,
am I correct?

> > If we go this way there is no need to change the API only the doc.
> >
> >>> Regarding representors, it's not different. When using TX on a
> >>> representor port, the packets appear as RX on its represented port.
> >>>
> >>> Please elaborate if there is a use case for the PORT_ID~ in which
> >>> the app can get the packets using rte_eth_rx_burst on the specified
> port-id.
> >>
> >> Multi-home host with a NIC with two physical ports and two PFs used
> >> by DPDK app with layer 3 (IP addresses). Different cores used to
> >> handle traffic from different ports plus routing in DPDK app. If
> >> traffic to port #0 IP address is received on phys port #1, it is
> >> useful to redirect traffic to port ID 0 directly to have these
> >> packets on correct CPU cores from the very beginning to avoid SW
> mechanisms to pass from port #1 CPU cores to port #0 CPU cores.
> >>
> > To make sure I understand you are talking about a DPDK application
> > that is connected to number of ports and it is Eswitch manager, but it
> > doesn't use representors but the actual ports, right?
> > I think the definition I wrote above also works for this case.
> 
> Other possible request is to direct traffic from phys port #0 to phys port #1
> directly and say it in terms of PORT_ID action.
> 
But we are talking about using the switch layer (transfer mode), right?

Best,
Ori
> Thanks,
> Andrew.
> 
> > Best,
> > Ori
> >
> >>>>
> >>>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>> ---
> >>>>    doc/guides/rel_notes/deprecation.rst | 5 +++++
> >>>>    1 file changed, 5 insertions(+)
> >>>>
> >>>> diff --git a/doc/guides/rel_notes/deprecation.rst
> >>>> b/doc/guides/rel_notes/deprecation.rst
> >>>> index d9c0e65921..6e6413c89f 100644
> >>>> --- a/doc/guides/rel_notes/deprecation.rst
> >>>> +++ b/doc/guides/rel_notes/deprecation.rst
> >>>> @@ -158,3 +158,8 @@ Deprecation Notices
> >>>>    * security: The functions ``rte_security_set_pkt_metadata`` and
> >>>>      ``rte_security_get_userdata`` will be made inline functions
> >>>> and additional
> >>>>      flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
> >>>> +
> >>>> +* ethdev: Definition of the flow API action PORT_ID is ambiguous
> >>>> +and
> >>>> needs
> >>>> +  clarification. Structure rte_flow_action_port_id will be
> >>>> +extended to
> >>>> +  specify traffic direction to represented entity or ethdev port
> >>>> itself in
> >>>> +  DPDK 21.11.
> >>>> --
> >>>> 2.30.2
> >>>>
> >


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
  2021-08-01 12:56  0%         ` Ori Kam
@ 2021-08-01 13:23  0%           ` Andrew Rybchenko
  2021-08-01 16:13  0%             ` Ori Kam
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-08-01 13:23 UTC (permalink / raw)
  To: Ori Kam, Eli Britstein, NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Ilya Maximets, Ajit Khaparde, Matan Azrad, Ivan Malov,
	Viacheslav Galaktionov

On 8/1/21 3:56 PM, Ori Kam wrote:
> Hi Andrew,
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Sunday, August 1, 2021 3:44 PM
>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
>>
>> Hi Ori,
>>
>> On 8/1/21 3:23 PM, Ori Kam wrote:
>>> Hi Andrew,
>>>
>>> I think before we can change the API we must agree on the meaning of
>> representor.
>>
>> The question is not directly related to a representor definition.
>> Just indirectly. PORT_ID action makes sense for non-representor ports as
>> well.
>>
>>> PSB more comments
>>>
>>>> -----Original Message-----
>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>> Sent: Sunday, August 1, 2021 3:04 PM
>>>> To: Eli Britstein <elibr@nvidia.com>; NBU-Contact-Thomas Monjalon
>>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Ori Kam
>>>> <orika@nvidia.com>
>>>> Cc: dev@dpdk.org; Ilya Maximets <i.maximets@ovn.org>; Ajit Khaparde
>>>> <ajit.khaparde@broadcom.com>; Matan Azrad <matan@nvidia.com>;
>> Ivan
>>>> Malov <ivan.malov@oktetlabs.ru>; Viacheslav Galaktionov
>>>> <viacheslav.galaktionov@oktetlabs.ru>
>>>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
>>>> changes
>>>>
>>>> On 8/1/21 1:57 PM, Eli Britstein wrote:
>>>>>
>>>>> On 8/1/2021 1:22 PM, Andrew Rybchenko wrote:
>>>>>> External email: Use caution opening links or attachments
>>>>>>
>>>>>>
>>>>>> By its very name, action PORT_ID means that packets hit an ethdev
>>>>>> with the given DPDK port ID. At least the current comments don't
>>>>>> state the opposite.
>>>>>> That said, since port representors had been adopted, applications
>>>>>> like OvS have been misusing the action. They misread its purpose as
>>>>>> sending packets to the opposite end of the "wire" plugged to the
>>>>>> given ethdev, for example, redirecting packets to the VF itself
>>>>>> rather than to its representor ethdev.
>>>>>> Another example: OvS relies on this action with the admin PF's
>>>>>> ethdev port ID specified in it in order to send offloaded packets
>>>>>> to the physical port.
>>>>>>
>>>>>> Since there might be applications which use this action in its
>>>>>> valid sense, one can't just change the documentation to greenlight
>>>>>> the opposite meaning.
>>>>>>
>>>>>> The documentation must be clarified and rte_flow_action_port_id
>>>>>> structure should be extended to support both meanings.
>>>>>
>>>>> I think the only clarification needed is that PORT_ID acts as if
>>>>> rte_eth_tx_burst is called with the specified port-id.
>>>>
>>>> Sorry, but I still think that it is opposite meaning to the current
>>>> documentation which says "Directs matching traffic to a given DPDK port
>> ID."
>>>> Since it happens on switching level (transfer rule) "to a given DPDK port"
>>>> means that it will be received on a given DPDK port.
>>>>
>>>> Anyway, the goal of the deprecation notice is to highlight that it
>>>> must be fixed and ensure that we can choose right decision even if it
>> breaks API/ABI.
>>>>
>>> Agree, it is good that you created the announcement.
>>
>> Hopefully you agree that the area requires clarification and must be
>> improved. I think so hot discussions really prove it.
>>
> +1
> 
>>> I think we should continue our discussion on what is a representor.
>>
>> Yes, but it is a hard topic. I'd like to unbind PORT_ID action from the
>> discussion, since the action makes sense for non-representors as well.
>>
> If this can be done great, I'm for it, but I'm not sure it can be, but let's try.
> 
>>> I think for current implementation the doc should say "direct /
>>> matches traffic to / from the switch port which the selected DPDK
>>> representor port is connected to or to DPDK port if this port is not a
>> representor."
>>
>> IMHO it is better to keep the definition of the action simple and do not have
>> any representor specifics in it. Representor is an ethdev port. If we direct
>> traffic to an ethdev port, it should be received on the ethdev port regardless
>> if it is a representor or not.
>> It is better to avoid exceptions and special cases.
>>
> 
> Lets see if I understand correctly, you suggest that port  action / item will be
> for DPDK port, unless they are marked with some bit which means that
> the traffic should be routed to the switch port which the DPDK port represent
> am I correct?

Here I'm talking about PORT_ID action only. As for details, I've tried
to keep it out-of-scope of the deprecation notice.

However, since we are going to break something here, it is better to
break hard to be sure that every single usage is updated. So, I tend
to the solution suggested by Ilya [1], which is similar to the Linux
kernel. I.e. add an enum with an invalid zero value and two members to
specify the direction.

[1] 
https://patches.dpdk.org/project/dpdk/patch/20210601111420.5549-1-ivan.malov@oktetlabs.ru/#133431
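
For illustration, that direction could look something like the sketch
below; the enum and field names are hypothetical, not an agreed API:

	/* Hypothetical extension of the PORT_ID action configuration. */
	enum rte_flow_port_id_dst {
		RTE_FLOW_PORT_ID_DST_UNSPEC = 0,  /* invalid, forces a choice */
		RTE_FLOW_PORT_ID_DST_ETHDEV,      /* deliver to the ethdev itself */
		RTE_FLOW_PORT_ID_DST_REPRESENTED, /* deliver to represented entity */
	};

	struct rte_flow_action_port_id {
		uint32_t original:1; /* use original flow port ID, as today */
		uint32_t reserved:31;
		enum rte_flow_port_id_dst dst; /* new: delivery destination */
		uint32_t id; /* DPDK port ID */
	};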

As for the PORT_ID pattern item, I think the ingress/egress attributes
define the direction. If it is an ingress flow rule, the PORT_ID item
should match traffic coming from the represented entity in the case of a
port representor, and from the associated network port in the case of an
ethdev port associated with it. In the egress case it instead matches
traffic sent using Tx burst via the corresponding ethdev port.

>>> If we go this way there is no need to change the API only the doc.
>>>
>>>>> Regarding representors, it's not different. When using TX on a
>>>>> representor port, the packets appear as RX on its represented port.
>>>>>
>>>>> Please elaborate if there is a use case for the PORT_ID~ in which
>>>>> the app can get the packets using rte_eth_rx_burst on the specified
>> port-id.
>>>>
>>>> Multi-home host with a NIC with two physical ports and two PFs used
>>>> by DPDK app with layer 3 (IP addresses). Different cores used to
>>>> handle traffic from different ports plus routing in DPDK app. If
>>>> traffic to port #0 IP address is received on phys port #1, it is
>>>> useful to redirect traffic to port ID 0 directly to have these
>>>> packets on correct CPU cores from the very beginning to avoid SW
>> mechanisms to pass from port #1 CPU cores to port #0 CPU cores.
>>>>
>>> To make sure I understand you are talking about a DPDK application
>>> that is connected to number of ports and it is Eswitch manager, but it
>>> doesn't use representors but the actual ports, right?
>>> I think the definition I wrote above also works for this case.
>>
>> Other possible request is to direct traffic from phys port #0 to phys port #1
>> directly and say it in terms of PORT_ID action.
>>
> But we are talking using the switch layer(transfer mode) right?

Yes.

> Best,
> Ori
>> Thanks,
>> Andrew.
>>
>>> Best,
>>> Ori
>>>
>>>>>>
>>>>>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>> ---
>>>>>>     doc/guides/rel_notes/deprecation.rst | 5 +++++
>>>>>>     1 file changed, 5 insertions(+)
>>>>>>
>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>> index d9c0e65921..6e6413c89f 100644
>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>> @@ -158,3 +158,8 @@ Deprecation Notices
>>>>>>     * security: The functions ``rte_security_set_pkt_metadata`` and
>>>>>>       ``rte_security_get_userdata`` will be made inline functions
>>>>>> and additional
>>>>>>       flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
>>>>>> +
>>>>>> +* ethdev: Definition of the flow API action PORT_ID is ambiguous
>>>>>> +and
>>>>>> needs
>>>>>> +  clarification. Structure rte_flow_action_port_id will be
>>>>>> +extended to
>>>>>> +  specify traffic direction to represented entity or ethdev port
>>>>>> itself in
>>>>>> +  DPDK 21.11.
>>>>>> --
>>>>>> 2.30.2
>>>>>>
>>>
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
  2021-08-01  8:50  0%   ` Andrew Rybchenko
@ 2021-08-01 14:15  0%     ` Xueming(Steven) Li
  0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-08-01 14:15 UTC (permalink / raw)
  To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable



> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Sunday, August 1, 2021 4:50 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang
> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
> 
> On 7/29/21 7:20 AM, Xueming(Steven) Li wrote:
> >
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Sent: Tuesday, July 13, 2021 12:18 AM
> >> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
> >> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong
> >> Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>;
> >> Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>;
> >> Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>;
> >> Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> >> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
> >> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>;
> >> Xueming(Steven) Li <xuemingl@nvidia.com>
> >> Cc: dev@dpdk.org; Viacheslav Galaktionov
> >> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> >> Subject: [PATCH] ethdev: fix representor port ID search by name
> >>
> >> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> >>
> >> Fix representor port ID search by name if the representor itself does
> >> not provide representors info. Getting a list of representors from a representor does not make sense. Instead, a parent device
> should be used.
> >>
> >> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
> >>
> >> Fixes: df7547a6a2cc ("ethdev: add helper function to get representor
> >> ID")
> >> Cc: stable@dpdk.org
> >>
> >> Signed-off-by: Viacheslav Galaktionov
> >> <viacheslav.galaktionov@oktetlabs.ru>
> >> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> ---
> >> The new field is added into the hole in rte_eth_dev_data structure.
> >> The patch does not change ABI, but extra care is required since ABI check is disabled for the structure because of the libabigail bug
> [1].
> >>
> >> Potentially it is bad for out-of-tree drivers which implement
> >> representors but do not fill in a new parert_port_id field in rte_eth_dev_data structure. Do we care?
> >>
> >> May be the patch should add lines to release notes, but I'd like to get initial feedback first.
> >>
> >> mlx5 changes should be reviwed by maintainers very carefully, since we are not sure if we patch it correctly.
> >>
> >> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> 
> [snip]
> 
> >> b/drivers/net/mlx5/linux/mlx5_os.c
> >> index be22d9cbd2..5550d30628 100644
> >> --- a/drivers/net/mlx5/linux/mlx5_os.c
> >> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> >> @@ -1511,6 +1511,17 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
> >>   	if (priv->representor) {
> >>   		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
> >>   		eth_dev->data->representor_id = priv->representor_id;
> >> +		MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
> >> +			const struct mlx5_priv *opriv =
> >> +				rte_eth_devices[port_id].data->dev_private;
> >> +
> >> +			if (!opriv ||
> >> +			    opriv->sh != priv->sh ||
> >> +			    opriv->representor)
> >> +				continue;
> >> +			eth_dev->data->parent_port_id = port_id;
> >> +			break;
> >> +		}
> >
> > At line 126, there is a logic that locate priv->domain_id, parent port_id could be found there.
> 
> Do you mean line 1260? The comment above says "Look for sibling devices in order to reuse their switch domain if any, otherwise
> allocate one.".
> So, it is not a parent. Is the comment misleading and parent matches the search criteria as well? But in any case, we should guarantee
> that it is a parent port, not a sibling port. So, we need extra criteria to match parent port only.

Yes, you are correct. How about mlx5_find_master_dev()? It locates the master port in the same switch domain.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name
  2021-08-01  8:40  0%               ` Andrew Rybchenko
@ 2021-08-01 14:25  0%                 ` Xueming(Steven) Li
  0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-08-01 14:25 UTC (permalink / raw)
  To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, stable



> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Sunday, August 1, 2021 4:40 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang
> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
> 
> On 7/29/21 7:13 AM, Xueming(Steven) Li wrote:
> >
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Sent: Tuesday, July 20, 2021 5:00 PM
> >> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde
> >> <ajit.khaparde@broadcom.com>; Somnath Kotur
> >> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>; Hyong
> >> Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>;
> >> Qiming Yang <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>;
> >> Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>;
> >> Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> >> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
> >> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
> >> Cc: dev@dpdk.org; Viacheslav Galaktionov
> >> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> >> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
> >>
> >> On 7/19/21 3:50 PM, Xueming(Steven) Li wrote:
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>> Sent: Monday, July 19, 2021 8:36 PM
> >>>> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde
> >>>> <ajit.khaparde@broadcom.com>; Somnath Kotur
> >>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>;
> >>>> Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
> >>>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi
> >>>> Zhang <qi.z.zhang@intel.com>; Haiyue Wang <haiyue.wang@intel.com>;
> >>>> Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>;
> >>>> Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas
> >>>> Monjalon <thomas@monjalon.net>; Ferruh Yigit
> >>>> <ferruh.yigit@intel.com>
> >>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
> >>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> >>>> Subject: Re: [PATCH] ethdev: fix representor port ID search by name
> >>>>
> >>>> On 7/19/21 2:54 PM, Xueming(Steven) Li wrote:
> >>>>>
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>>>> Sent: Monday, July 19, 2021 4:46 PM
> >>>>>> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Ajit Khaparde
> >>>>>> <ajit.khaparde@broadcom.com>; Somnath Kotur
> >>>>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>;
> >>>>>> Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
> >>>>>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>; Qi
> >>>>>> Zhang <qi.z.zhang@intel.com>; Haiyue Wang
> >>>>>> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf
> >>>>>> Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> >>>>>> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
> >>>>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
> >>>>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
> >>>>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> >>>>>> Subject: Re: [PATCH] ethdev: fix representor port ID search by
> >>>>>> name
> >>>>>>
> >>>>>> On 7/19/21 9:58 AM, Xueming(Steven) Li wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>>> -----Original Message-----
> >>>>>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>>>>>> Sent: Tuesday, July 13, 2021 12:18 AM
> >>>>>>>> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur
> >>>>>>>> <somnath.kotur@broadcom.com>; John Daley <johndale@cisco.com>;
> >>>>>>>> Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing
> >>>>>>>> <beilei.xing@intel.com>; Qiming Yang <qiming.yang@intel.com>;
> >>>>>>>> Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang
> >>>>>>>> <haiyue.wang@intel.com>; Matan Azrad <matan@nvidia.com>; Shahaf
> >>>>>>>> Shuler <shahafs@nvidia.com>; Slava Ovsiienko
> >>>>>>>> <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon
> >>>>>>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>;
> >>>>>>>> Xueming(Steven) Li <xuemingl@nvidia.com>
> >>>>>>>> Cc: dev@dpdk.org; Viacheslav Galaktionov
> >>>>>>>> <viacheslav.galaktionov@oktetlabs.ru>; stable@dpdk.org
> >>>>>>>> Subject: [PATCH] ethdev: fix representor port ID search by name
> >>>>>>>>
> >>>>>>>> From: Viacheslav Galaktionov
> >>>>>>>> <viacheslav.galaktionov@oktetlabs.ru>
> >>>>>>>>
> >>>>>>>> Fix representor port ID search by name if the representor
> >>>>>>>> itself does not provide representors info. Getting a list of
> >>>>>>>> representors from a representor does not make sense. Instead, a
> >>>>>>>> parent device
> >>>>>> should be used.
> >>>>>>>>
> >>>>>>>> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
> >>>>>>>>
> >>>>>>>> Fixes: df7547a6a2cc ("ethdev: add helper function to get
> >>>>>>>> representor
> >>>>>>>> ID")
> >>>>>>>> Cc: stable@dpdk.org
> >>>>>>>>
> >>>>>>>> Signed-off-by: Viacheslav Galaktionov
> >>>>>>>> <viacheslav.galaktionov@oktetlabs.ru>
> >>>>>>>> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>>>>>> ---
> >>>>>>>> The new field is added into the hole in rte_eth_dev_data structure.
> >>>>>>>> The patch does not change ABI, but extra care is required since
> >>>>>>>> ABI check is disabled for the structure because of the
> >>>>>>>> libabigail bug
> >>>>>> [1].
> >>>>>>>>
> >>>>>>>> Potentially it is bad for out-of-tree drivers which implement
> >>>>>>>> representors but do not fill in a new parert_port_id field in rte_eth_dev_data structure. Do we care?
> >>>>>>>>
> >>>>>>>> May be the patch should add lines to release notes, but I'd like to get initial feedback first.
> >>>>>>>>
> >>>>>>>> mlx5 changes should be reviwed by maintainers very carefully, since we are not sure if we patch it correctly.
> >>>>>>>>
> >>>>>>>> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> >>>>>>
> >>>>>> [snip]
> >>>>>>
> >>>>>>>> --- a/lib/ethdev/ethdev_driver.h
> >>>>>>>> +++ b/lib/ethdev/ethdev_driver.h
> >>>>>>>> @@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
> >>>>>>>>       * For backward compatibility, if no representor info, direct
> >>>>>>>>       * map legacy VF (no controller and pf).
> >>>>>>>>       *
> >>>>>>>> - * @param ethdev
> >>>>>>>> - *  Handle of ethdev port.
> >>>>>>>> + * @param parent_port_id
> >>>>>>>> + *  Port ID of the backing device.
> >>>>>>>>       * @param type
> >>>>>>>>       *  Representor type.
> >>>>>>>>       * @param controller
> >>>>>>>> @@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
> >>>>>>>>       */
> >>>>>>>>      __rte_internal
> >>>>>>>>      int
> >>>>>>>> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> >>>>>>>> +rte_eth_representor_id_get(uint16_t parent_port_id,
> >>>>>>>
> >>>>>>> It make more sense to get representor info from parent port.
> >>>>>>> Representor is a member of switch domain, PMD owns the
> >>>>>>> information of the representor owner port and info of
> >>>>>>> representors. This change looks better, but not sure whether it
> >>>>>>> valuable to introduce a new
> >>>>>> member to the EAL data structure.
> >>>>>>
> >>>>>> IMHO, it is simply incorrect to return representors info on a
> >>>>>> representor itself. Representor info is an information which representors may be populated using the device.
> >>>>>>
> >>>>>> If above statement is correct, we need a way to get parent device
> >>>>>> by representor to do name to representor ID mapping. I see two options to do it:
> >>>>>>      A. Dedicated field in rte_eth_dev_data as the patch does.
> >>>>>>      B. Dedicated ethdev op (since representor knows parent port ID anyway).
> >>>>>> We have chosen (A) because of simplicity.
> >>>>>
> >>>>> Just recalled that representor port could be probed w/o owner PF, is a force for parent port?
> >>>>
> >>>> I thought that it is impossible and parent port is absolutely
> >>>> required for a representor. Could you provide an example and explain how will it work?
> >>>
> >>> In case of bonding, PF0 and PF1 become one PF port `bond0`, PCI address is PF0.
> >>> 	-a <PF0>,representor=pf[0-1]vf[0-99] // this is the syntax we proposed.
> >>
> >> Is it net/bonding or vendor-specific bonding in HW?
> >> If I remember correctly in the case of net/bonding we have ethdev ports for bonded devices.
> >
> > Not net/bonding pmd, it's Linux bonding, supported by hw driver.
> 
> Got it.
> 
> >>
> >>>
> >>> To be backward compatible, also support the following 2 devargs:
> >>> 	-a <pf0>,representor=[0-99] // probe bond0 and representor on pf0
> >>> 	-a <pf1>,representor=[0-99] // probe representors on pf1.
> >>> If devargs start with PF1 devargs, no owner PF1 created as it
> >>> disabled in bonding. Can't create bond0(PF0) automatically here as device is located by PCI address(PF1) from devargs.
> >>
> >> So, I guess the problem is vendor-specific bonding in HW. Anyway
> >> legacy backward compatible representor spec should not require
> >> representors info since it worked before without it. So, it does not sound like a reason to have representors info on a representor
> itself.
> >
> > Legacy backward logic could be something like this: if PF owner port found, use it, fallback to current representor.
> > This won't break anything I guess, how do you think?
> 
> Logically even in legacy backward compatibility PF1 VFs representors have parent port ID - PF0 which is a bond of PF0 & PF1. So,
> parent_port_id should be filled in. In this case eth_representor_cmp() will do rte_eth_representor_id_get(PF0-bond-id, -1, -1, VF, &id)
> which will return PF0 VF representor ID. Most likely it will even match and everything works, but it is still incorrect.

PF0, the bond of PF0 and PF1, will return representor info for the VFs/SFs under both PFs, so it should work.

> 
> In fact, I have another idea. Try to do:
> rte_eth_representor_id_get(representor-port-id, ...) first for the backward compatibility case and, if not supported, do it on parent
> port ID.

Looks good to me
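
A minimal sketch of that fallback as I read it; the rte_eth_representor_id_get() argument list below is assumed, parent_port_id is the field proposed by this patch, and the helper name is illustrative only:

static int
repr_id_with_fallback(const struct rte_eth_dev *edev,
		      enum rte_eth_representor_type type,
		      int controller, int pf, int representor_port,
		      uint16_t *repr_id)
{
	int ret;

	/* Backward compatibility: ask the representor port itself first. */
	ret = rte_eth_representor_id_get(edev->data->port_id, type,
					 controller, pf, representor_port,
					 repr_id);
	if (ret == -ENOTSUP)
		/* Not supported there: fall back to the parent port. */
		ret = rte_eth_representor_id_get(edev->data->parent_port_id,
						 type, controller, pf,
						 representor_port, repr_id);
	return ret;
}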

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
  2021-08-01 13:23  0%           ` Andrew Rybchenko
@ 2021-08-01 16:13  0%             ` Ori Kam
  2021-08-01 20:09  0%               ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Ori Kam @ 2021-08-01 16:13 UTC (permalink / raw)
  To: Andrew Rybchenko, Eli Britstein, NBU-Contact-Thomas Monjalon,
	Ferruh Yigit
  Cc: dev, Ilya Maximets, Ajit Khaparde, Matan Azrad, Ivan Malov,
	Viacheslav Galaktionov

Hi  Andrew,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Sunday, August 1, 2021 4:24 PM
> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
> 
> On 8/1/21 3:56 PM, Ori Kam wrote:
> > Hi Andrew,
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Sent: Sunday, August 1, 2021 3:44 PM
> >> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
> >> changes
> >>
> >> Hi Ori,
> >>
> >> On 8/1/21 3:23 PM, Ori Kam wrote:
> >>> Hi Andrew,
> >>>
> >>> I think before we can change the API we must agree on the meaning of
> >> representor.
> >>
> >> The question is not directly related to a representor definition.
> >> Just indirectly. PORT_ID action makes sense for non-representor ports
> >> as well.
> >>
> >>> PSB more comments
> >>>
> >>>> -----Original Message-----
> >>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>> Sent: Sunday, August 1, 2021 3:04 PM
> >>>> To: Eli Britstein <elibr@nvidia.com>; NBU-Contact-Thomas Monjalon
> >>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Ori
> >>>> Kam <orika@nvidia.com>
> >>>> Cc: dev@dpdk.org; Ilya Maximets <i.maximets@ovn.org>; Ajit
> Khaparde
> >>>> <ajit.khaparde@broadcom.com>; Matan Azrad <matan@nvidia.com>;
> >> Ivan
> >>>> Malov <ivan.malov@oktetlabs.ru>; Viacheslav Galaktionov
> >>>> <viacheslav.galaktionov@oktetlabs.ru>
> >>>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
> >>>> changes
> >>>>
> >>>> On 8/1/21 1:57 PM, Eli Britstein wrote:
> >>>>>
> >>>>> On 8/1/2021 1:22 PM, Andrew Rybchenko wrote:
> >>>>>> External email: Use caution opening links or attachments
> >>>>>>
> >>>>>>
> >>>>>> By its very name, action PORT_ID means that packets hit an ethdev
> >>>>>> with the given DPDK port ID. At least the current comments don't
> >>>>>> state the opposite.
> >>>>>> That said, since port representors had been adopted, applications
> >>>>>> like OvS have been misusing the action. They misread its purpose
> >>>>>> as sending packets to the opposite end of the "wire" plugged to
> >>>>>> the given ethdev, for example, redirecting packets to the VF
> >>>>>> itself rather than to its representor ethdev.
> >>>>>> Another example: OvS relies on this action with the admin PF's
> >>>>>> ethdev port ID specified in it in order to send offloaded packets
> >>>>>> to the physical port.
> >>>>>>
> >>>>>> Since there might be applications which use this action in its
> >>>>>> valid sense, one can't just change the documentation to
> >>>>>> greenlight the opposite meaning.
> >>>>>>
> >>>>>> The documentation must be clarified and rte_flow_action_port_id
> >>>>>> structure should be extended to support both meanings.
> >>>>>
> >>>>> I think the only clarification needed is that PORT_ID acts as if
> >>>>> rte_eth_tx_burst is called with the specified port-id.
> >>>>
> >>>> Sorry, but I still think that it is opposite meaning to the current
> >>>> documentation which says "Directs matching traffic to a given DPDK
> >>>> port
> >> ID."
> >>>> Since it happens on switching level (transfer rule) "to a given DPDK
> port"
> >>>> means that it will be received on a given DPDK port.
> >>>>
> >>>> Anyway, the goal of the deprecation notice is to highlight that it
> >>>> must be fixed and ensure that we can choose right decision even if
> >>>> it
> >> breaks API/ABI.
> >>>>
> >>> Agree, it is good that you created the announcement.
> >>
> >> Hopefully you agree that the area requires clarification and must be
> >> improved. I think so hot discussions really prove it.
> >>
> > +1
> >
> >>> I think we should continue our discussion on what is a representor.
> >>
> >> Yes, but it is a hard topic. I'd like to unbind PORT_ID action from
> >> the discussion, since the action makes sense for non-representors as well.
> >>
> > If this can be done great, I'm for it, but I'm not sure it can be, but let's try.
> >
> >>> I think for current implementation the doc should say "direct /
> >>> matches traffic to / from the switch port which the selected DPDK
> >>> representor port is connected to or to DPDK port if this port is not
> >>> a
> >> representor."
> >>
> >> IMHO it is better to keep the definition of the action simple and do
> >> not have any representor specifics in it. Representor is an ethdev
> >> port. If we direct traffic to an ethdev port, it should be received
> >> on the ethdev port regardless if it is a representor or not.
> >> It is better to avoid exceptions and special cases.
> >>
> >
> > Lets see if I understand correctly, you suggest that port  action /
> > item will be for DPDK port, unless they are marked with some bit which
> > means that the traffic should be routed to the switch port which the
> > DPDK port represent am I correct?
> 
> Here I'm talking about PORT_ID action only. As for details, I've tried to keep it
> out-of-scope of the deprecation notice.
> 
+1, but we need to check whether we need it at all or whether just changing the doc is enough.

> However, since we are going to break something here, it is better to break
> hard to be sure that every since usage is updated. So, I tend to to solution
> suggested by Ilya [1] which is similar to Linux kernel.
> I.e. add an enum with invalid zero value and two members to specify
> direction.
> 
> [1]
> https://patches.dpdk.org/project/dpdk/patch/20210601111420.5549-1-
> ivan.malov@oktetlabs.ru/#133431
> 
> as for PORT_ID pattern item, I think ingress/egress attributes define
> direction. If it is an ingress flow rule, PORT_ID item should match traffic
> coming from represented entity in the case of port representor and
> associated network port in the case of ethdev port associated with it. In
> egress case it otherwise matches traffic sent using Tx burst via corresponding
> ethdev port.
> 
I think that ingress/egress only have meaning when talking about NIC steering
and not E-Switch steering.
I think that we can just use the original bit to mark whether we want to send
traffic to the DPDK port or to the other port.

In any case, I will be happy if we could have a meeting to discuss this
approach before you send your patch.
I think this can save a lot of time.

Best,
Ori


> >>> If we go this way there is no need to change the API only the doc.
> >>>
> >>>>> Regarding representors, it's not different. When using TX on a
> >>>>> representor port, the packets appear as RX on its represented port.
> >>>>>
> >>>>> Please elaborate if there is a use case for the PORT_ID~ in which
> >>>>> the app can get the packets using rte_eth_rx_burst on the
> >>>>> specified
> >> port-id.
> >>>>
> >>>> Multi-home host with a NIC with two physical ports and two PFs used
> >>>> by DPDK app with layer 3 (IP addresses). Different cores used to
> >>>> handle traffic from different ports plus routing in DPDK app. If
> >>>> traffic to port #0 IP address is received on phys port #1, it is
> >>>> useful to redirect traffic to port ID 0 directly to have these
> >>>> packets on correct CPU cores from the very beginning to avoid SW
> >> mechanisms to pass from port #1 CPU cores to port #0 CPU cores.
> >>>>
> >>> To make sure I understand you are talking about a DPDK application
> >>> that is connected to number of ports and it is Eswitch manager, but
> >>> it doesn't use representors but the actual ports, right?
> >>> I think the definition I wrote above also works for this case.
> >>
> >> Other possible request is to direct traffic from phys port #0 to phys
> >> port #1 directly and say it in terms of PORT_ID action.
> >>
> > But we are talking using the switch layer(transfer mode) right?
> 
> Yes.
> 
> > Best,
> > Ori
> >> Thanks,
> >> Andrew.
> >>
> >>> Best,
> >>> Ori
> >>>
> >>>>>>
> >>>>>> Signed-off-by: Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>
> >>>>>> ---
> >>>>>>     doc/guides/rel_notes/deprecation.rst | 5 +++++
> >>>>>>     1 file changed, 5 insertions(+)
> >>>>>>
> >>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
> >>>>>> b/doc/guides/rel_notes/deprecation.rst
> >>>>>> index d9c0e65921..6e6413c89f 100644
> >>>>>> --- a/doc/guides/rel_notes/deprecation.rst
> >>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
> >>>>>> @@ -158,3 +158,8 @@ Deprecation Notices
> >>>>>>     * security: The functions ``rte_security_set_pkt_metadata`` and
> >>>>>>       ``rte_security_get_userdata`` will be made inline functions
> >>>>>> and additional
> >>>>>>       flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
> >>>>>> +
> >>>>>> +* ethdev: Definition of the flow API action PORT_ID is ambiguous
> >>>>>> +and
> >>>>>> needs
> >>>>>> +  clarification. Structure rte_flow_action_port_id will be
> >>>>>> +extended to
> >>>>>> +  specify traffic direction to represented entity or ethdev port
> >>>>>> itself in
> >>>>>> +  DPDK 21.11.
> >>>>>> --
> >>>>>> 2.30.2
> >>>>>>
> >>>
> >


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
  2021-08-01 16:13  0%             ` Ori Kam
@ 2021-08-01 20:09  0%               ` Andrew Rybchenko
  2021-08-02  7:28  0%                 ` Ori Kam
  0 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-08-01 20:09 UTC (permalink / raw)
  To: Ori Kam, Eli Britstein, NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Ilya Maximets, Ajit Khaparde, Matan Azrad, Ivan Malov,
	Viacheslav Galaktionov

On 8/1/21 7:13 PM, Ori Kam wrote:
> Hi  Andrew,
> 
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Sunday, August 1, 2021 4:24 PM
>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
>>
>> On 8/1/21 3:56 PM, Ori Kam wrote:
>>> Hi Andrew,
>>>
>>>> -----Original Message-----
>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>> Sent: Sunday, August 1, 2021 3:44 PM
>>>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
>>>> changes
>>>>
>>>> Hi Ori,
>>>>
>>>> On 8/1/21 3:23 PM, Ori Kam wrote:
>>>>> Hi Andrew,
>>>>>
>>>>> I think before we can change the API we must agree on the meaning of
>>>> representor.
>>>>
>>>> The question is not directly related to a representor definition.
>>>> Just indirectly. PORT_ID action makes sense for non-representor ports
>>>> as well.
>>>>
>>>>> PSB more comments
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>> Sent: Sunday, August 1, 2021 3:04 PM
>>>>>> To: Eli Britstein <elibr@nvidia.com>; NBU-Contact-Thomas Monjalon
>>>>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Ori
>>>>>> Kam <orika@nvidia.com>
>>>>>> Cc: dev@dpdk.org; Ilya Maximets <i.maximets@ovn.org>; Ajit
>> Khaparde
>>>>>> <ajit.khaparde@broadcom.com>; Matan Azrad <matan@nvidia.com>;
>>>> Ivan
>>>>>> Malov <ivan.malov@oktetlabs.ru>; Viacheslav Galaktionov
>>>>>> <viacheslav.galaktionov@oktetlabs.ru>
>>>>>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
>>>>>> changes
>>>>>>
>>>>>> On 8/1/21 1:57 PM, Eli Britstein wrote:
>>>>>>>
>>>>>>> On 8/1/2021 1:22 PM, Andrew Rybchenko wrote:
>>>>>>>> External email: Use caution opening links or attachments
>>>>>>>>
>>>>>>>>
>>>>>>>> By its very name, action PORT_ID means that packets hit an ethdev
>>>>>>>> with the given DPDK port ID. At least the current comments don't
>>>>>>>> state the opposite.
>>>>>>>> That said, since port representors had been adopted, applications
>>>>>>>> like OvS have been misusing the action. They misread its purpose
>>>>>>>> as sending packets to the opposite end of the "wire" plugged to
>>>>>>>> the given ethdev, for example, redirecting packets to the VF
>>>>>>>> itself rather than to its representor ethdev.
>>>>>>>> Another example: OvS relies on this action with the admin PF's
>>>>>>>> ethdev port ID specified in it in order to send offloaded packets
>>>>>>>> to the physical port.
>>>>>>>>
>>>>>>>> Since there might be applications which use this action in its
>>>>>>>> valid sense, one can't just change the documentation to
>>>>>>>> greenlight the opposite meaning.
>>>>>>>>
>>>>>>>> The documentation must be clarified and rte_flow_action_port_id
>>>>>>>> structure should be extended to support both meanings.
>>>>>>>
>>>>>>> I think the only clarification needed is that PORT_ID acts as if
>>>>>>> rte_eth_tx_burst is called with the specified port-id.
>>>>>>
>>>>>> Sorry, but I still think that it is opposite meaning to the current
>>>>>> documentation which says "Directs matching traffic to a given DPDK
>>>>>> port
>>>> ID."
>>>>>> Since it happens on switching level (transfer rule) "to a given DPDK
>> port"
>>>>>> means that it will be received on a given DPDK port.
>>>>>>
>>>>>> Anyway, the goal of the deprecation notice is to highlight that it
>>>>>> must be fixed and ensure that we can choose right decision even if
>>>>>> it
>>>> breaks API/ABI.
>>>>>>
>>>>> Agree, it is good that you created the announcement.
>>>>
>>>> Hopefully you agree that the area requires clarification and must be
>>>> improved. I think so hot discussions really prove it.
>>>>
>>> +1
>>>
>>>>> I think we should continue our discussion on what is a representor.
>>>>
>>>> Yes, but it is a hard topic. I'd like to unbind PORT_ID action from
>>>> the discussion, since the action makes sense for non-representors as well.
>>>>
>>> If this can be done great, I'm for it, but I'm not sure it can be, but let's try.
>>>
>>>>> I think for current implementation the doc should say "direct /
>>>>> matches traffic to / from the switch port which the selected DPDK
>>>>> representor port is connected to or to DPDK port if this port is not
>>>>> a
>>>> representor."
>>>>
>>>> IMHO it is better to keep the definition of the action simple and do
>>>> not have any representor specifics in it. Representor is an ethdev
>>>> port. If we direct traffic to an ethdev port, it should be received
>>>> on the ethdev port regardless if it is a representor or not.
>>>> It is better to avoid exceptions and special cases.
>>>>
>>>
>>> Lets see if I understand correctly, you suggest that port  action /
>>> item will be for DPDK port, unless they are marked with some bit which
>>> means that the traffic should be routed to the switch port which the
>>> DPDK port represent am I correct?
>>
>> Here I'm talking about PORT_ID action only. As for details, I've tried to keep it
>> out-of-scope of the deprecation notice.
>>
> +1 but we need to check if we need it at all or just change doc.
> 
>> However, since we are going to break something here, it is better to break
>> hard to be sure that every since usage is updated. So, I tend to to solution
>> suggested by Ilya [1] which is similar to Linux kernel.
>> I.e. add an enum with invalid zero value and two members to specify
>> direction.
>>
>> [1]
>> https://patches.dpdk.org/project/dpdk/patch/20210601111420.5549-1-
>> ivan.malov@oktetlabs.ru/#133431
>>
>> as for PORT_ID pattern item, I think ingress/egress attributes define
>> direction. If it is an ingress flow rule, PORT_ID item should match traffic
>> coming from represented entity in the case of port representor and
>> associated network port in the case of ethdev port associated with it. In
>> egress case it otherwise matches traffic sent using Tx burst via corresponding
>> ethdev port.
>>
> I think that Ingress egress has only meaning when talking about NIC steering
> and not E-Switch steering.

See [2], section 12.2.2.4 "Attribute: Transfer", last paragraph.

[2] https://doc.dpdk.org/guides/prog_guide/rte_flow.html#attributes

In fact, I was going to submit one more deprecation notice on the topic
to clarify it, but after rereading the documentation I now think that it
is good enough.

> I think that we can just use original bit to mark if we want to send traffic
> to DPDK port or to other port.

As I said, the problem with that solution is a silent breakage.
It is typically bad, since old code can simply keep misusing the action.
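
To illustrate why a hard break helps, here is a rough sketch in the spirit
of [1]; the names below are made up for illustration and are not taken from
that patch. A new mandatory field whose zero value is invalid forces every
existing user of PORT_ID to be revisited instead of silently changing
behaviour:

/* Illustrative names only, not the actual proposal in [1]. */
enum rte_flow_port_id_direction {
	RTE_FLOW_PORT_ID_DIRECTION_UNSPECIFIED = 0, /* invalid, rejected */
	RTE_FLOW_PORT_ID_DIRECTION_TO_ETHDEV,       /* deliver to the ethdev */
	RTE_FLOW_PORT_ID_DIRECTION_TO_REPRESENTED,  /* deliver to represented entity */
};

struct rte_flow_action_port_id {
	enum rte_flow_port_id_direction direction; /* new mandatory field */
	uint32_t original:1; /* existing field kept as-is */
	uint32_t reserved:31;
	uint32_t id;
};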

> In any case I will be happy if we could have a meeting to discuss this
> approach before sending your patch.

Please let the deprecation notice in. In whatever direction we fix it,
we'll break something in any case, and DPDK users must be warned in
advance. We will either change the definition of the action, or change
support for the action in drivers (in different ways in different
drivers), or do both.

> I think this can save a lot of time.

It is a good idea; let's schedule it for the end of August. I guess many
of us have vacations now or in the near future, so it will simply be hard
to find a time in the next 3 weeks which works for all, or at least the
majority, of us.

Thanks,
Andrew.

> Best,
> Ori
> 
> 
>>>>> If we go this way there is no need to change the API only the doc.
>>>>>
>>>>>>> Regarding representors, it's not different. When using TX on a
>>>>>>> representor port, the packets appear as RX on its represented port.
>>>>>>>
>>>>>>> Please elaborate if there is a use case for the PORT_ID~ in which
>>>>>>> the app can get the packets using rte_eth_rx_burst on the
>>>>>>> specified
>>>> port-id.
>>>>>>
>>>>>> Multi-home host with a NIC with two physical ports and two PFs used
>>>>>> by DPDK app with layer 3 (IP addresses). Different cores used to
>>>>>> handle traffic from different ports plus routing in DPDK app. If
>>>>>> traffic to port #0 IP address is received on phys port #1, it is
>>>>>> useful to redirect traffic to port ID 0 directly to have these
>>>>>> packets on correct CPU cores from the very beginning to avoid SW
>>>> mechanisms to pass from port #1 CPU cores to port #0 CPU cores.
>>>>>>
>>>>> To make sure I understand you are talking about a DPDK application
>>>>> that is connected to number of ports and it is Eswitch manager, but
>>>>> it doesn't use representors but the actual ports, right?
>>>>> I think the definition I wrote above also works for this case.
>>>>
>>>> Other possible request is to direct traffic from phys port #0 to phys
>>>> port #1 directly and say it in terms of PORT_ID action.
>>>>
>>> But we are talking using the switch layer(transfer mode) right?
>>
>> Yes.
>>
>>> Best,
>>> Ori
>>>> Thanks,
>>>> Andrew.
>>>>
>>>>> Best,
>>>>> Ori
>>>>>
>>>>>>>>
>>>>>>>> Signed-off-by: Andrew Rybchenko
>> <andrew.rybchenko@oktetlabs.ru>
>>>>>>>> ---
>>>>>>>>      doc/guides/rel_notes/deprecation.rst | 5 +++++
>>>>>>>>      1 file changed, 5 insertions(+)
>>>>>>>>
>>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>>>> index d9c0e65921..6e6413c89f 100644
>>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>>>> @@ -158,3 +158,8 @@ Deprecation Notices
>>>>>>>>      * security: The functions ``rte_security_set_pkt_metadata`` and
>>>>>>>>        ``rte_security_get_userdata`` will be made inline functions
>>>>>>>> and additional
>>>>>>>>        flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
>>>>>>>> +
>>>>>>>> +* ethdev: Definition of the flow API action PORT_ID is ambiguous
>>>>>>>> +and
>>>>>>>> needs
>>>>>>>> +  clarification. Structure rte_flow_action_port_id will be
>>>>>>>> +extended to
>>>>>>>> +  specify traffic direction to represented entity or ethdev port
>>>>>>>> itself in
>>>>>>>> +  DPDK 21.11.
>>>>>>>> --
>>>>>>>> 2.30.2
>>>>>>>>
>>>>>
>>>
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
  2021-08-01 20:09  0%               ` Andrew Rybchenko
@ 2021-08-02  7:28  0%                 ` Ori Kam
  2021-08-02 10:11  0%                   ` Andrew Rybchenko
  0 siblings, 1 reply; 200+ results
From: Ori Kam @ 2021-08-02  7:28 UTC (permalink / raw)
  To: Andrew Rybchenko, Eli Britstein, NBU-Contact-Thomas Monjalon,
	Ferruh Yigit
  Cc: dev, Ilya Maximets, Ajit Khaparde, Matan Azrad, Ivan Malov,
	Viacheslav Galaktionov



> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> 
> On 8/1/21 7:13 PM, Ori Kam wrote:
> > Hi  Andrew,
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Sent: Sunday, August 1, 2021 4:24 PM
> >> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
> >> changes
> >>
> >> On 8/1/21 3:56 PM, Ori Kam wrote:
> >>> Hi Andrew,
> >>>
> >>>> -----Original Message-----
> >>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>> Sent: Sunday, August 1, 2021 3:44 PM
> >>>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
> >>>> changes
> >>>>
> >>>> Hi Ori,
> >>>>
> >>>> On 8/1/21 3:23 PM, Ori Kam wrote:
> >>>>> Hi Andrew,
> >>>>>
> >>>>> I think before we can change the API we must agree on the meaning
> >>>>> of
> >>>> representor.
> >>>>
> >>>> The question is not directly related to a representor definition.
> >>>> Just indirectly. PORT_ID action makes sense for non-representor
> >>>> ports as well.
> >>>>
> >>>>> PSB more comments
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >>>>>> Sent: Sunday, August 1, 2021 3:04 PM
> >>>>>> To: Eli Britstein <elibr@nvidia.com>; NBU-Contact-Thomas Monjalon
> >>>>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Ori
> >>>>>> Kam <orika@nvidia.com>
> >>>>>> Cc: dev@dpdk.org; Ilya Maximets <i.maximets@ovn.org>; Ajit
> >> Khaparde
> >>>>>> <ajit.khaparde@broadcom.com>; Matan Azrad
> <matan@nvidia.com>;
> >>>> Ivan
> >>>>>> Malov <ivan.malov@oktetlabs.ru>; Viacheslav Galaktionov
> >>>>>> <viacheslav.galaktionov@oktetlabs.ru>
> >>>>>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
> >>>>>> changes
> >>>>>>
> >>>>>> On 8/1/21 1:57 PM, Eli Britstein wrote:
> >>>>>>>
> >>>>>>> On 8/1/2021 1:22 PM, Andrew Rybchenko wrote:
> >>>>>>>> External email: Use caution opening links or attachments
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> By its very name, action PORT_ID means that packets hit an
> >>>>>>>> ethdev with the given DPDK port ID. At least the current
> >>>>>>>> comments don't state the opposite.
> >>>>>>>> That said, since port representors had been adopted,
> >>>>>>>> applications like OvS have been misusing the action. They
> >>>>>>>> misread its purpose as sending packets to the opposite end of
> >>>>>>>> the "wire" plugged to the given ethdev, for example,
> >>>>>>>> redirecting packets to the VF itself rather than to its representor
> ethdev.
> >>>>>>>> Another example: OvS relies on this action with the admin PF's
> >>>>>>>> ethdev port ID specified in it in order to send offloaded
> >>>>>>>> packets to the physical port.
> >>>>>>>>
> >>>>>>>> Since there might be applications which use this action in its
> >>>>>>>> valid sense, one can't just change the documentation to
> >>>>>>>> greenlight the opposite meaning.
> >>>>>>>>
> >>>>>>>> The documentation must be clarified and rte_flow_action_port_id
> >>>>>>>> structure should be extended to support both meanings.
> >>>>>>>
> >>>>>>> I think the only clarification needed is that PORT_ID acts as if
> >>>>>>> rte_eth_tx_burst is called with the specified port-id.
> >>>>>>
> >>>>>> Sorry, but I still think that it is opposite meaning to the
> >>>>>> current documentation which says "Directs matching traffic to a
> >>>>>> given DPDK port
> >>>> ID."
> >>>>>> Since it happens on switching level (transfer rule) "to a given
> >>>>>> DPDK
> >> port"
> >>>>>> means that it will be received on a given DPDK port.
> >>>>>>
> >>>>>> Anyway, the goal of the deprecation notice is to highlight that
> >>>>>> it must be fixed and ensure that we can choose right decision
> >>>>>> even if it
> >>>> breaks API/ABI.
> >>>>>>
> >>>>> Agree, it is good that you created the announcement.
> >>>>
> >>>> Hopefully you agree that the area requires clarification and must
> >>>> be improved. I think so hot discussions really prove it.
> >>>>
> >>> +1
> >>>
> >>>>> I think we should continue our discussion on what is a representor.
> >>>>
> >>>> Yes, but it is a hard topic. I'd like to unbind PORT_ID action from
> >>>> the discussion, since the action makes sense for non-representors as
> well.
> >>>>
> >>> If this can be done great, I'm for it, but I'm not sure it can be, but let's
> try.
> >>>
> >>>>> I think for current implementation the doc should say "direct /
> >>>>> matches traffic to / from the switch port which the selected DPDK
> >>>>> representor port is connected to or to DPDK port if this port is
> >>>>> not a
> >>>> representor."
> >>>>
> >>>> IMHO it is better to keep the definition of the action simple and
> >>>> do not have any representor specifics in it. Representor is an
> >>>> ethdev port. If we direct traffic to an ethdev port, it should be
> >>>> received on the ethdev port regardless if it is a representor or not.
> >>>> It is better to avoid exceptions and special cases.
> >>>>
> >>>
> >>> Lets see if I understand correctly, you suggest that port  action /
> >>> item will be for DPDK port, unless they are marked with some bit
> >>> which means that the traffic should be routed to the switch port
> >>> which the DPDK port represent am I correct?
> >>
> >> Here I'm talking about PORT_ID action only. As for details, I've
> >> tried to keep it out-of-scope of the deprecation notice.
> >>
> > +1 but we need to check if we need it at all or just change doc.
> >
> >> However, since we are going to break something here, it is better to
> >> break hard to be sure that every since usage is updated. So, I tend
> >> to to solution suggested by Ilya [1] which is similar to Linux kernel.
> >> I.e. add an enum with invalid zero value and two members to specify
> >> direction.
> >>
> >> [1]
> >> https://patches.dpdk.org/project/dpdk/patch/20210601111420.5549-1-
> >> ivan.malov@oktetlabs.ru/#133431
> >>
> >> as for PORT_ID pattern item, I think ingress/egress attributes define
> >> direction. If it is an ingress flow rule, PORT_ID item should match
> >> traffic coming from represented entity in the case of port
> >> representor and associated network port in the case of ethdev port
> >> associated with it. In egress case it otherwise matches traffic sent
> >> using Tx burst via corresponding ethdev port.
> >>
> > I think that Ingress egress has only meaning when talking about NIC
> > steering and not E-Switch steering.
> 
> See [2]  12.2.2.4. Attribute: Transfer last paragraph.
> 
> [2] https://doc.dpdk.org/guides/prog_guide/rte_flow.html#attributes
> 
> In fact I was going to submit one more deprecation notice on the topic to
> clarify it, but reread the documentation and now think that it is good enough.
> 

I think this needs to change:
" When transferring flow rules, ingress and egress attributes (Attribute: Traffic direction) keep their original meaning,
as if processing traffic emitted or received by the application."
But if we route traffic between vports, what is the app direction?
For example, if sending traffic from VF A to VF B (the app is on the PF),
is it ingress or egress traffic? If the direction is reversed (B to A), does it change?
What if we are sending traffic from VF A to the wire, or from the wire to A? Which is ingress / egress?
(Assuming that the VFs are connected to different applications.)



> > I think that we can just use original bit to mark if we want to send
> > traffic to DPDK port or to other port.
> 
> As I say the problem of the solution is that a silent breakage.
> It is typically bad since  old code can simply misuse it.
> 
You have a point, but then maybe we should also delete this bit.
Also, I don't like the idea of breaking almost all apps that are using DPDK,
especially if it will not cause an error on build.
Just adding more fields will break the app logic, not the compilation, which
I think is the worst thing. (A large number of applications are based on
the current logic.)

> > In any case I will be happy if we could have a meeting to discuss this
> > approach before sending your patch.
> 
> Please, let the deprecation notice in. In whatever direction we fix it, we'll
> break something in any case and DPDK users must be warned in advance.
> We either change definition of the action or change support of the action in
> drivers (in different ways in different drivers) or do both.

O.K.
> 
> > I think this can save a lot of time.
> 
> It is a good idea, let's schedule to the end of August. I guess many of us have
> vacations now or in the nearest time. It will be simply hard to find time in the
> nearest 3 weeks which is good for all or at least majority of us.
> 

Sure.
Best,
Ori
> Thanks,
> Andrew.
> 
> > Best,
> > Ori
> >
> >
> >>>>> If we go this way there is no need to change the API only the doc.
> >>>>>
> >>>>>>> Regarding representors, it's not different. When using TX on a
> >>>>>>> representor port, the packets appear as RX on its represented port.
> >>>>>>>
> >>>>>>> Please elaborate if there is a use case for the PORT_ID~ in
> >>>>>>> which the app can get the packets using rte_eth_rx_burst on the
> >>>>>>> specified
> >>>> port-id.
> >>>>>>
> >>>>>> Multi-home host with a NIC with two physical ports and two PFs
> >>>>>> used by DPDK app with layer 3 (IP addresses). Different cores
> >>>>>> used to handle traffic from different ports plus routing in DPDK
> >>>>>> app. If traffic to port #0 IP address is received on phys port
> >>>>>> #1, it is useful to redirect traffic to port ID 0 directly to
> >>>>>> have these packets on correct CPU cores from the very beginning
> >>>>>> to avoid SW
> >>>> mechanisms to pass from port #1 CPU cores to port #0 CPU cores.
> >>>>>>
> >>>>> To make sure I understand you are talking about a DPDK application
> >>>>> that is connected to number of ports and it is Eswitch manager,
> >>>>> but it doesn't use representors but the actual ports, right?
> >>>>> I think the definition I wrote above also works for this case.
> >>>>
> >>>> Other possible request is to direct traffic from phys port #0 to
> >>>> phys port #1 directly and say it in terms of PORT_ID action.
> >>>>
> >>> But we are talking using the switch layer(transfer mode) right?
> >>
> >> Yes.
> >>
> >>> Best,
> >>> Ori
> >>>> Thanks,
> >>>> Andrew.
> >>>>
> >>>>> Best,
> >>>>> Ori
> >>>>>
> >>>>>>>>
> >>>>>>>> Signed-off-by: Andrew Rybchenko
> >> <andrew.rybchenko@oktetlabs.ru>
> >>>>>>>> ---
> >>>>>>>>      doc/guides/rel_notes/deprecation.rst | 5 +++++
> >>>>>>>>      1 file changed, 5 insertions(+)
> >>>>>>>>
> >>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
> >>>>>>>> b/doc/guides/rel_notes/deprecation.rst
> >>>>>>>> index d9c0e65921..6e6413c89f 100644
> >>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
> >>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
> >>>>>>>> @@ -158,3 +158,8 @@ Deprecation Notices
> >>>>>>>>      * security: The functions ``rte_security_set_pkt_metadata`` and
> >>>>>>>>        ``rte_security_get_userdata`` will be made inline
> >>>>>>>> functions and additional
> >>>>>>>>        flags will be added in structure ``rte_security_ctx`` in DPDK
> 21.11.
> >>>>>>>> +
> >>>>>>>> +* ethdev: Definition of the flow API action PORT_ID is
> >>>>>>>> +ambiguous and
> >>>>>>>> needs
> >>>>>>>> +  clarification. Structure rte_flow_action_port_id will be
> >>>>>>>> +extended to
> >>>>>>>> +  specify traffic direction to represented entity or ethdev
> >>>>>>>> +port
> >>>>>>>> itself in
> >>>>>>>> +  DPDK 21.11.
> >>>>>>>> --
> >>>>>>>> 2.30.2
> >>>>>>>>
> >>>>>
> >>>
> >


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH 1/2] ethdev: announce flow API action PORT_ID changes
  2021-08-02  7:28  0%                 ` Ori Kam
@ 2021-08-02 10:11  0%                   ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-08-02 10:11 UTC (permalink / raw)
  To: Ori Kam, Eli Britstein, NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Ilya Maximets, Ajit Khaparde, Matan Azrad, Ivan Malov,
	Viacheslav Galaktionov

Hi Ori,

On 8/2/21 10:28 AM, Ori Kam wrote:
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>
>> On 8/1/21 7:13 PM, Ori Kam wrote:
>>> Hi  Andrew,
>>>
>>>> -----Original Message-----
>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>> Sent: Sunday, August 1, 2021 4:24 PM
>>>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
>>>> changes
>>>>
>>>> On 8/1/21 3:56 PM, Ori Kam wrote:
>>>>> Hi Andrew,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>> Sent: Sunday, August 1, 2021 3:44 PM
>>>>>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
>>>>>> changes
>>>>>>
>>>>>> Hi Ori,
>>>>>>
>>>>>> On 8/1/21 3:23 PM, Ori Kam wrote:
>>>>>>> Hi Andrew,
>>>>>>>
>>>>>>> I think before we can change the API we must agree on the meaning
>>>>>>> of
>>>>>> representor.
>>>>>>
>>>>>> The question is not directly related to a representor definition.
>>>>>> Just indirectly. PORT_ID action makes sense for non-representor
>>>>>> ports as well.
>>>>>>
>>>>>>> PSB more comments
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>>>>>>>> Sent: Sunday, August 1, 2021 3:04 PM
>>>>>>>> To: Eli Britstein <elibr@nvidia.com>; NBU-Contact-Thomas Monjalon
>>>>>>>> <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Ori
>>>>>>>> Kam <orika@nvidia.com>
>>>>>>>> Cc: dev@dpdk.org; Ilya Maximets <i.maximets@ovn.org>; Ajit
>>>> Khaparde
>>>>>>>> <ajit.khaparde@broadcom.com>; Matan Azrad
>> <matan@nvidia.com>;
>>>>>> Ivan
>>>>>>>> Malov <ivan.malov@oktetlabs.ru>; Viacheslav Galaktionov
>>>>>>>> <viacheslav.galaktionov@oktetlabs.ru>
>>>>>>>> Subject: Re: [PATCH 1/2] ethdev: announce flow API action PORT_ID
>>>>>>>> changes
>>>>>>>>
>>>>>>>> On 8/1/21 1:57 PM, Eli Britstein wrote:
>>>>>>>>>
>>>>>>>>> On 8/1/2021 1:22 PM, Andrew Rybchenko wrote:
>>>>>>>>>> External email: Use caution opening links or attachments
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> By its very name, action PORT_ID means that packets hit an
>>>>>>>>>> ethdev with the given DPDK port ID. At least the current
>>>>>>>>>> comments don't state the opposite.
>>>>>>>>>> That said, since port representors had been adopted,
>>>>>>>>>> applications like OvS have been misusing the action. They
>>>>>>>>>> misread its purpose as sending packets to the opposite end of
>>>>>>>>>> the "wire" plugged to the given ethdev, for example,
>>>>>>>>>> redirecting packets to the VF itself rather than to its representor
>> ethdev.
>>>>>>>>>> Another example: OvS relies on this action with the admin PF's
>>>>>>>>>> ethdev port ID specified in it in order to send offloaded
>>>>>>>>>> packets to the physical port.
>>>>>>>>>>
>>>>>>>>>> Since there might be applications which use this action in its
>>>>>>>>>> valid sense, one can't just change the documentation to
>>>>>>>>>> greenlight the opposite meaning.
>>>>>>>>>>
>>>>>>>>>> The documentation must be clarified and rte_flow_action_port_id
>>>>>>>>>> structure should be extended to support both meanings.
>>>>>>>>>
>>>>>>>>> I think the only clarification needed is that PORT_ID acts as if
>>>>>>>>> rte_eth_tx_burst is called with the specified port-id.
>>>>>>>>
>>>>>>>> Sorry, but I still think that it is opposite meaning to the
>>>>>>>> current documentation which says "Directs matching traffic to a
>>>>>>>> given DPDK port
>>>>>> ID."
>>>>>>>> Since it happens on switching level (transfer rule) "to a given
>>>>>>>> DPDK
>>>> port"
>>>>>>>> means that it will be received on a given DPDK port.
>>>>>>>>
>>>>>>>> Anyway, the goal of the deprecation notice is to highlight that
>>>>>>>> it must be fixed and ensure that we can choose right decision
>>>>>>>> even if it
>>>>>> breaks API/ABI.
>>>>>>>>
>>>>>>> Agree, it is good that you created the announcement.
>>>>>>
>>>>>> Hopefully you agree that the area requires clarification and must
>>>>>> be improved. I think so hot discussions really prove it.
>>>>>>
>>>>> +1
>>>>>
>>>>>>> I think we should continue our discussion on what is a representor.
>>>>>>
>>>>>> Yes, but it is a hard topic. I'd like to unbind PORT_ID action from
>>>>>> the discussion, since the action makes sense for non-representors as
>> well.
>>>>>>
>>>>> If this can be done great, I'm for it, but I'm not sure it can be, but let's
>> try.
>>>>>
>>>>>>> I think for current implementation the doc should say "direct /
>>>>>>> matches traffic to / from the switch port which the selected DPDK
>>>>>>> representor port is connected to or to DPDK port if this port is
>>>>>>> not a
>>>>>> representor."
>>>>>>
>>>>>> IMHO it is better to keep the definition of the action simple and
>>>>>> do not have any representor specifics in it. Representor is an
>>>>>> ethdev port. If we direct traffic to an ethdev port, it should be
>>>>>> received on the ethdev port regardless if it is a representor or not.
>>>>>> It is better to avoid exceptions and special cases.
>>>>>>
>>>>>
>>>>> Lets see if I understand correctly, you suggest that port  action /
>>>>> item will be for DPDK port, unless they are marked with some bit
>>>>> which means that the traffic should be routed to the switch port
>>>>> which the DPDK port represent am I correct?
>>>>
>>>> Here I'm talking about PORT_ID action only. As for details, I've
>>>> tried to keep it out-of-scope of the deprecation notice.
>>>>
>>> +1 but we need to check if we need it at all or just change doc.
>>>
>>>> However, since we are going to break something here, it is better to
>>>> break hard to be sure that every since usage is updated. So, I tend
>>>> to to solution suggested by Ilya [1] which is similar to Linux kernel.
>>>> I.e. add an enum with invalid zero value and two members to specify
>>>> direction.
>>>>
>>>> [1]
>>>> https://patches.dpdk.org/project/dpdk/patch/20210601111420.5549-1-
>>>> ivan.malov@oktetlabs.ru/#133431
>>>>
>>>> as for PORT_ID pattern item, I think ingress/egress attributes define
>>>> direction. If it is an ingress flow rule, PORT_ID item should match
>>>> traffic coming from represented entity in the case of port
>>>> representor and associated network port in the case of ethdev port
>>>> associated with it. In egress case it otherwise matches traffic sent
>>>> using Tx burst via corresponding ethdev port.
>>>>
>>> I think that Ingress egress has only meaning when talking about NIC
>>> steering and not E-Switch steering.
>>
>> See [2]  12.2.2.4. Attribute: Transfer last paragraph.
>>
>> [2] https://doc.dpdk.org/guides/prog_guide/rte_flow.html#attributes
>>
>> In fact I was going to submit one more deprecation notice on the topic to
>> clarify it, but reread the documentation and now think that it is good enough.
>>
> 
> I think this needs to change,
> " When transferring flow rules, ingress and egress attributes (Attribute: Traffic direction) keep their original meaning,
> as if processing traffic emitted or received by the application."
> But if we route traffic between vports was is the app direction?
> For example if sending traffic from VF A to VF B (app is on PF)
> is it ingress or egress traffic? If the direction is reverse (B to A) does it change?

It is ingress, since it would go to the DPDK app if we didn't reroute it
to VF B directly. I think that egress is what is sent by the DPDK app;
everything else is ingress. I.e. egress rules are applied to traffic which
is generated by the DPDK application itself.

> what if we are sending traffic from VF A to wire or from wire to A what is ingress / egress?
> (Assuming that the VFs are connected to different application.)

See above. These are all ingress rules, since they are applied to traffic
which is not generated by the DPDK application which inserts them.
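
A minimal sketch of the VF A to VF B case above as a transfer + ingress
rule; REPR_A and REPR_B are placeholder port IDs of the two VF
representors, and the exact delivery semantics of the PORT_ID action are
exactly what this thread is debating:

/* Placeholder port IDs of the two VF representors. */
#define REPR_A 1
#define REPR_B 2

const struct rte_flow_attr attr = { .ingress = 1, .transfer = 1 };
const struct rte_flow_item_port_id from_port = { .id = REPR_A };
const struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_PORT_ID, .spec = &from_port },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
const struct rte_flow_action_port_id to_port = { .id = REPR_B };
const struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &to_port },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
/* rte_flow_create(pf_port_id, &attr, pattern, actions, &error); */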

>>> I think that we can just use original bit to mark if we want to send
>>> traffic to DPDK port or to other port.
>>
>> As I say the problem of the solution is that a silent breakage.
>> It is typically bad since  old code can simply misuse it.
>>
> You have a point but then maybe we should also delete this bit.
> Also I don't like the idea to break almost all apps that are using DPDK.
> especially if it will not cause error on build.

We can rename a field to cause errors on build :)

> just adding more fields will break the app logic not the compilation which
> I think is the worst thing. (large number of application are based on
> the current logic)

The problem is that applications could implement different logic because of
the ambiguous definition.

>>> In any case I will be happy if we could have a meeting to discuss this
>>> approach before sending your patch.
>>
>> Please, let the deprecation notice in. In whatever direction we fix it, we'll
>> break something in any case and DPDK users must be warned in advance.
>> We either change definition of the action or change support of the action in
>> drivers (in different ways in different drivers) or do both.
> 
> O.K.

Great, many thanks.

Andrew.

>>> I think this can save a lot of time.
>>
>> It is a good idea, let's schedule to the end of August. I guess many of us have
>> vacations now or in the nearest time. It will be simply hard to find time in the
>> nearest 3 weeks which is good for all or at least majority of us.
>>
> 
> Sure.
> Best,
> Ori
>> Thanks,
>> Andrew.
>>
>>> Best,
>>> Ori
>>>
>>>
>>>>>>> If we go this way there is no need to change the API only the doc.
>>>>>>>
>>>>>>>>> Regarding representors, it's not different. When using TX on a
>>>>>>>>> representor port, the packets appear as RX on its represented port.
>>>>>>>>>
>>>>>>>>> Please elaborate if there is a use case for the PORT_ID~ in
>>>>>>>>> which the app can get the packets using rte_eth_rx_burst on the
>>>>>>>>> specified
>>>>>> port-id.
>>>>>>>>
>>>>>>>> Multi-home host with a NIC with two physical ports and two PFs
>>>>>>>> used by DPDK app with layer 3 (IP addresses). Different cores
>>>>>>>> used to handle traffic from different ports plus routing in DPDK
>>>>>>>> app. If traffic to port #0 IP address is received on phys port
>>>>>>>> #1, it is useful to redirect traffic to port ID 0 directly to
>>>>>>>> have these packets on correct CPU cores from the very beginning
>>>>>>>> to avoid SW
>>>>>> mechanisms to pass from port #1 CPU cores to port #0 CPU cores.
>>>>>>>>
>>>>>>> To make sure I understand you are talking about a DPDK application
>>>>>>> that is connected to number of ports and it is Eswitch manager,
>>>>>>> but it doesn't use representors but the actual ports, right?
>>>>>>> I think the definition I wrote above also works for this case.
>>>>>>
>>>>>> Other possible request is to direct traffic from phys port #0 to
>>>>>> phys port #1 directly and say it in terms of PORT_ID action.
>>>>>>
>>>>> But we are talking using the switch layer(transfer mode) right?
>>>>
>>>> Yes.
>>>>
>>>>> Best,
>>>>> Ori
>>>>>> Thanks,
>>>>>> Andrew.
>>>>>>
>>>>>>> Best,
>>>>>>> Ori
>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Andrew Rybchenko
>>>> <andrew.rybchenko@oktetlabs.ru>
>>>>>>>>>> ---
>>>>>>>>>>       doc/guides/rel_notes/deprecation.rst | 5 +++++
>>>>>>>>>>       1 file changed, 5 insertions(+)
>>>>>>>>>>
>>>>>>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>> index d9c0e65921..6e6413c89f 100644
>>>>>>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>>>>>>> @@ -158,3 +158,8 @@ Deprecation Notices
>>>>>>>>>>       * security: The functions ``rte_security_set_pkt_metadata`` and
>>>>>>>>>>         ``rte_security_get_userdata`` will be made inline
>>>>>>>>>> functions and additional
>>>>>>>>>>         flags will be added in structure ``rte_security_ctx`` in DPDK
>> 21.11.
>>>>>>>>>> +
>>>>>>>>>> +* ethdev: Definition of the flow API action PORT_ID is
>>>>>>>>>> +ambiguous and
>>>>>>>>>> needs
>>>>>>>>>> +  clarification. Structure rte_flow_action_port_id will be
>>>>>>>>>> +extended to
>>>>>>>>>> +  specify traffic direction to represented entity or ethdev
>>>>>>>>>> +port
>>>>>>>>>> itself in
>>>>>>>>>> +  DPDK 21.11.
>>>>>>>>>> --
>>>>>>>>>> 2.30.2
>>>>>>>>>>
>>>>>>>
>>>>>
>>>
> 


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [dpdk-announce] URGENT: review of deprecation notices before closing 21.08
@ 2021-08-02 12:33  3% Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-08-02 12:33 UTC (permalink / raw)
  To: announce

The next release 21.11 will allow API/ABI breaking changes.
The process is to announce such changes in the previous release notes.
We are closing the release 21.08 this week so it becomes very urgent
to review all these notices and vote (ack) or reject them now.

For convenience, I am adding those patches in a bundle for easy review:
https://patches.dpdk.org/bundle/tmonjalo/deprecation-notices/

Thanks for participating



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v4] doc: announce API changes for Windows compatibility
  @ 2021-08-02 13:00  3%         ` Dmitry Kozlyuk
  2021-08-02 13:48  0%           ` Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2021-08-02 13:00 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: Thomas Monjalon, dev, Ferruh Yigit, Fiona Trahe, Khoa To,
	Ray Kinsella, andrew.rybchenko, olivier.matz, navasile,
	pallavi.kadam, ranjit.menon, bruce.richardson, stephen

2021-08-02 12:45 (UTC+0000), Akhil Goyal:
> > 21/07/2021 21:55, Dmitry Kozlyuk:  
> > > Windows headers define `s_addr`, `min`, and `max` as macros.
> > > If DPDK headers are included after Windows ones, DPDK structure
> > > definitions containing fields with these names get broken (example 1),
> > > as well as any usage of such fields (example 2). If DPDK headers
> > > undefined these macros, it could break consumer code (example 3).
> > > It is proposed to rename structure fields in DPDK, because Win32 headers
> > > are used more widely than DPDK, as a general-purpose platform compared
> > > to domain-specific kit, and are harder to fix because of that.
> > > Exact new names are left for further discussion.
> > >
> > > Example 1:
> > >
> > >     /* in DPDK public header included after windows.h */
> > >     struct rte_type {
> > >         int min;    /* ERROR: `min` is a macro */
> > >     };
> > >
> > > Example 2:
> > >
> > >     #include <rte_ether.h>
> > >     #include <winsock2.h>
> > >     struct rte_ether_hdr eh;
> > >     eh.s_addr.addr_bytes[0] = 0;    /* ERROR: `addr_s` is a macro */
> > >
> > > Example 3:
> > >
> > >     #include <winsock2.h>
> > >     #include <rte_ether.h>
> > >     struct in_addr addr;
> > >     addr.s_addr = 0;      /* ERROR: there is no `s_addr` field,
> > >                              and `s_addr` macro is undefined by DPDK. */
> > >
> > > Commit 6c068dbd9fea ("net: work around s_addr macro on Windows")
> > > modified definition of `struct rte_ether_hdr` to avoid the issue.
> > > However, the workaround assumes `#define s_addr S_addr.S_un`
> > > in Windows headers, which is not a part of official API.
> > > It also complicates the definition of `struct rte_ether_hdr`.
> > >
> > > Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> > > Acked-by: Khoa To <khot@microsoft.com>
> > > ---
> > > +* net: ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
> > > +  will be renamed in DPDK 21.11 to avoid conflict with Windows Sockets  
> > headers.  
> > > +
> > > +* compressdev: ``min`` and ``max`` fields of ``rte_param_log2_range``  
> > structure  
> > > +  will be renamed in DPDK 21.11 to avoid conflict with Windows Sockets  
> > headers.
> > 
> > The struct rte_param_log2_range should also be renamed to include
> > "compress" prefix.
> > But as we break the struct API, it is not an issue I guess.
> >   
> > > +* cryptodev: ``min`` and ``max`` fields of ``rte_crypto_param_range``  
> > structure  
> > > +  will be renamed in DPDK 21.11 to avoid conflict with Windows Sockets  
> > headers.
> > 
> > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> >   
> Can we have a local variable named as min/max?
> If not, then I believe it is not a good idea.

Yes, except for inline functions in public headers.
The only problematic one I know of is this (rte_lru_x86.h):

static inline int
f_lru_pos(uint64_t lru_list)
{
	__m128i lst = _mm_set_epi64x((uint64_t)-1, lru_list);
	__m128i min = _mm_minpos_epu16(lst); /* <<< */
	return _mm_extract_epi16(min, 1);
}

Fixing it breaks neither API nor ABI, thus no explicit deprecation notice.
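
For illustration only, a minimal sketch of such a fix (the new local
variable name is an arbitrary choice, not part of any API):

static inline int
f_lru_pos(uint64_t lru_list)
{
	__m128i lst = _mm_set_epi64x((uint64_t)-1, lru_list);
	__m128i min_pos = _mm_minpos_epu16(lst); /* local renamed from `min` */
	return _mm_extract_epi16(min_pos, 1);
}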

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v4] doc: announce API changes for Windows compatibility
  2021-08-02 13:00  3%         ` Dmitry Kozlyuk
@ 2021-08-02 13:48  0%           ` Akhil Goyal
  2021-08-02 14:57  0%             ` Tal Shnaiderman
  2021-08-02 17:46  0%             ` Thomas Monjalon
  0 siblings, 2 replies; 200+ results
From: Akhil Goyal @ 2021-08-02 13:48 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: Thomas Monjalon, dev, Ferruh Yigit, Fiona Trahe, Khoa To,
	Ray Kinsella, andrew.rybchenko, olivier.matz, navasile,
	pallavi.kadam, ranjit.menon, bruce.richardson, stephen

> 2021-08-02 12:45 (UTC+0000), Akhil Goyal:
> > > 21/07/2021 21:55, Dmitry Kozlyuk:
> > > > Windows headers define `s_addr`, `min`, and `max` as macros.
> > > > If DPDK headers are included after Windows ones, DPDK structure
> > > > definitions containing fields with these names get broken (example 1),
> > > > as well as any usage of such fields (example 2). If DPDK headers
> > > > undefined these macros, it could break consumer code (example 3).
> > > > It is proposed to rename structure fields in DPDK, because Win32
> headers
> > > > are used more widely than DPDK, as a general-purpose platform
> compared
> > > > to domain-specific kit, and are harder to fix because of that.
> > > > Exact new names are left for further discussion.
> > > >
> > > > Example 1:
> > > >
> > > >     /* in DPDK public header included after windows.h */
> > > >     struct rte_type {
> > > >         int min;    /* ERROR: `min` is a macro */
> > > >     };
> > > >
> > > > Example 2:
> > > >
> > > >     #include <rte_ether.h>
> > > >     #include <winsock2.h>
> > > >     struct rte_ether_hdr eh;
> > > >     eh.s_addr.addr_bytes[0] = 0;    /* ERROR: `s_addr` is a macro */
> > > >
> > > > Example 3:
> > > >
> > > >     #include <winsock2.h>
> > > >     #include <rte_ether.h>
> > > >     struct in_addr addr;
> > > >     addr.s_addr = 0;      /* ERROR: there is no `s_addr` field,
> > > >                              and `s_addr` macro is undefined by DPDK. */
> > > >
> > > > Commit 6c068dbd9fea ("net: work around s_addr macro on Windows")
> > > > modified definition of `struct rte_ether_hdr` to avoid the issue.
> > > > However, the workaround assumes `#define s_addr S_un.S_addr`
> > > > in Windows headers, which is not a part of official API.
> > > > It also complicates the definition of `struct rte_ether_hdr`.
> > > >
> > > > Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> > > > Acked-by: Khoa To <khot@microsoft.com>
> > > > ---
> > > > +* net: ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr`` structure
> > > > +  will be renamed in DPDK 21.11 to avoid conflict with Windows
> Sockets
> > > headers.
> > > > +
> > > > +* compressdev: ``min`` and ``max`` fields of ``rte_param_log2_range``
> > > structure
> > > > +  will be renamed in DPDK 21.11 to avoid conflict with Windows
> Sockets
> > > headers.
> > >
> > > The struct rte_param_log2_range should also be renamed to include
> > > "compress" prefix.
> > > But as we break the struct API, it is not an issue I guess.
> > >
> > > > +* cryptodev: ``min`` and ``max`` fields of ``rte_crypto_param_range``
> > > structure
> > > > +  will be renamed in DPDK 21.11 to avoid conflict with Windows
> Sockets
> > > headers.
> > >
> > > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > >
> > Can we have a local variable named as min/max?
> > If not, then I believe it is not a good idea.
> 
> Yes, except for inline functions in public headers.
> The only problematic one I know of is this (rte_lru_x86.h):
> 
> static inline int
> f_lru_pos(uint64_t lru_list)
> {
> 	__m128i lst = _mm_set_epi64x((uint64_t)-1, lru_list);
> 	__m128i min = _mm_minpos_epu16(lst); /* <<< */
> 	return _mm_extract_epi16(min, 1);
> }
> 
> Fixing it breaks neither API nor ABI, thus no explicit deprecation notice.
OK,
Acked-by: Akhil Goyal <gakhil@marvell.com>

I hope when windows compilation is enabled, it will be part of CI and it will run
on each patch which goes to patchworks.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v4] doc: announce API changes for Windows compatibility
  2021-08-02 13:48  0%           ` Akhil Goyal
@ 2021-08-02 14:57  0%             ` Tal Shnaiderman
  2021-08-02 17:46  0%             ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Tal Shnaiderman @ 2021-08-02 14:57 UTC (permalink / raw)
  To: Akhil Goyal, Dmitry Kozlyuk
  Cc: NBU-Contact-Thomas Monjalon, dev, Ferruh Yigit, Fiona Trahe,
	Khoa To, Ray Kinsella, andrew.rybchenko, olivier.matz, navasile,
	pallavi.kadam, ranjit.menon, bruce.richardson, stephen

> Subject: Re: [dpdk-dev] [EXT] Re: [PATCH v4] doc: announce API changes for
> Windows compatibility
> 
> External email: Use caution opening links or attachments
> 
> 
> > 2021-08-02 12:45 (UTC+0000), Akhil Goyal:
> > > > 21/07/2021 21:55, Dmitry Kozlyuk:
> > > > > Windows headers define `s_addr`, `min`, and `max` as macros.
> > > > > If DPDK headers are included after Windows ones, DPDK structure
> > > > > definitions containing fields with these names get broken
> > > > > (example 1), as well as any usage of such fields (example 2). If
> > > > > DPDK headers undefined these macros, it could break consumer code
> (example 3).
> > > > > It is proposed to rename structure fields in DPDK, because Win32
> > headers
> > > > > are used more widely than DPDK, as a general-purpose platform
> > compared
> > > > > to domain-specific kit, and are harder to fix because of that.
> > > > > Exact new names are left for further discussion.
> > > > >
> > > > > Example 1:
> > > > >
> > > > >     /* in DPDK public header included after windows.h */
> > > > >     struct rte_type {
> > > > >         int min;    /* ERROR: `min` is a macro */
> > > > >     };
> > > > >
> > > > > Example 2:
> > > > >
> > > > >     #include <rte_ether.h>
> > > > >     #include <winsock2.h>
> > > > >     struct rte_ether_hdr eh;
> > > > >     eh.s_addr.addr_bytes[0] = 0;    /* ERROR: `s_addr` is a macro */
> > > > >
> > > > > Example 3:
> > > > >
> > > > >     #include <winsock2.h>
> > > > >     #include <rte_ether.h>
> > > > >     struct in_addr addr;
> > > > >     addr.s_addr = 0;      /* ERROR: there is no `s_addr` field,
> > > > >                              and `s_addr` macro is undefined by
> > > > > DPDK. */
> > > > >
> > > > > Commit 6c068dbd9fea ("net: work around s_addr macro on
> Windows")
> > > > > modified definition of `struct rte_ether_hdr` to avoid the issue.
> > > > > However, the workaround assumes `#define s_addr S_un.S_addr` in
> > > > > Windows headers, which is not a part of official API.
> > > > > It also complicates the definition of `struct rte_ether_hdr`.
> > > > >
> > > > > Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> > > > > Acked-by: Khoa To <khot@microsoft.com>
> > > > > ---
> > > > > +* net: ``s_addr`` and ``d_addr`` fields of ``rte_ether_hdr``
> > > > > +structure
> > > > > +  will be renamed in DPDK 21.11 to avoid conflict with Windows
> > Sockets
> > > > headers.
> > > > > +
> > > > > +* compressdev: ``min`` and ``max`` fields of
> > > > > +``rte_param_log2_range``
> > > > structure
> > > > > +  will be renamed in DPDK 21.11 to avoid conflict with Windows
> > Sockets
> > > > headers.
> > > >
> > > > The struct rte_param_log2_range should also be renamed to include
> > > > "compress" prefix.
> > > > But as we break the struct API, it is not an issue I guess.
> > > >
> > > > > +* cryptodev: ``min`` and ``max`` fields of
> > > > > +``rte_crypto_param_range``
> > > > structure
> > > > > +  will be renamed in DPDK 21.11 to avoid conflict with Windows
> > Sockets
> > > > headers.
> > > >
> > > > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > > >
> > > Can we have a local variable named as min/max?
> > > If not, then I believe it is not a good idea.
> >
> > Yes, except for inline functions in public headers.
> > The only problematic one I know of is this (rte_lru_x86.h):
> >
> > static inline int
> > f_lru_pos(uint64_t lru_list)
> > {
> >       __m128i lst = _mm_set_epi64x((uint64_t)-1, lru_list);
> >       __m128i min = _mm_minpos_epu16(lst); /* <<< */
> >       return _mm_extract_epi16(min, 1); }
> >
> > Fixing it breaks neither API nor ABI, thus no explicit deprecation notice.
> OK,
> Acked-by: Akhil Goyal <gakhil@marvell.com>
> 
> I hope when windows compilation is enabled, it will be part of CI and it will
> run on each patch which goes to patchworks.

Windows compilation is already part of CI in ci/iol-testing and ci/Intel-compilation.

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] doc: announce: make rte intr handle internal
@ 2021-08-02 16:03 10% Harman Kalra
  2021-08-02 19:20  0% ` Andrew Rybchenko
  2021-08-03  2:37  0% ` Xia, Chenbo
  0 siblings, 2 replies; 200+ results
From: Harman Kalra @ 2021-08-02 16:03 UTC (permalink / raw)
  To: jerinj, david.marchand, thomas, Ray Kinsella; +Cc: dev, Harman Kalra

Moving struct rte_intr_handle as an internal structure to
avoid any ABI breakages in future, since this structure defines
some static arrays and changing respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
MSI-X interrupts that can be defined for a PCI device, while PCI
specification allows maximum 2048 MSI-X interrupts that can be used.
If some PCI device requires more than 512 vectors, either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
PCI device MSI-X size on probe time. Either way it's an ABI breakage.
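
As an illustration only (the helper names below are hypothetical, not an
actual interface), making the structure internal would let callers go
through allocation and accessor functions, so the vector array could be
sized from the PCI device MSI-X count at probe time without ABI impact:

	/* hypothetical sketch, names are placeholders */
	struct rte_intr_handle *intr_handle;

	intr_handle = rte_intr_instance_alloc(pci_msix_count);
	if (intr_handle == NULL)
		return -ENOMEM;
	rte_intr_fd_set(intr_handle, vfio_dev_fd);
	rte_intr_type_set(intr_handle, RTE_INTR_HANDLE_VFIO_MSIX);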

Discussion thread:
https://mails.dpdk.org/archives/dev/2021-March/202959.html

Change already included in 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0

Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
 doc/guides/rel_notes/deprecation.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index d9c0e65921..e95574b1ec 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -17,6 +17,9 @@ Deprecation Notices
 * eal: The function ``rte_eal_remote_launch`` will return new error codes
   after read or write error on the pipe, instead of calling ``rte_panic``.
 
+* eal: Making ``struct rte_intr_handle`` internal to avoid any ABI breakages
+  in future.
+
 * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
   not allow for writing optimized code for all the CPU architectures supported
   in DPDK. DPDK has adopted the atomic operations from
-- 
2.18.0


^ permalink raw reply	[relevance 10%]

* [dpdk-dev] [PATCH v12 00/10] eal: Add EAL API for threading
  2021-07-30 22:31  3% ` [dpdk-dev] [PATCH v11 00/10] " Narcisa Ana Maria Vasile
@ 2021-08-02 17:32  3%   ` Narcisa Ana Maria Vasile
  2021-08-03 19:01  3%     ` [dpdk-dev] [PATCH v13 " Narcisa Ana Maria Vasile
  0 siblings, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-08-02 17:32 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

EAL thread API

**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model,
so it currently
relies on a header file that hides the Windows calls under
pthread matched interfaces. Given that EAL should isolate the environment
specifics from the applications and libraries and mediate
all the communication with the operating systems, a new EAL interface
is needed for thread management.

**Goals**
* Introduce a generic EAL API for threading support that will remove
  the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
  RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
  3rd party thread library through a configuration option.

**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)

**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();

lib/librte_eal/common/rte_thread.c
int rte_thread_create() 
{
	return pthread_create();
}

lib/librte_eal/windows/rte_thread.c
int rte_thread_create() 
{
	return CreateThread();
}
-----------------------------------------------------

**Thread attributes**

When or after a thread is created, specific characteristics of the thread
can be adjusted. Given that the thread characteristics that are of interest
for DPDK applications are affinity and priority, the following structure
that represents thread attributes has been defined:

typedef struct
{
	enum rte_thread_priority priority;
	rte_cpuset_t cpuset;
} rte_thread_attr_t;

The *rte_thread_create()* function can optionally receive
an rte_thread_attr_t
object that will cause the thread to be created with the
affinity and priority
described by the attributes object. If no rte_thread_attr_t is passed
(parameter is NULL), the default affinity and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.

*Priority* is represented through an enum that currently advertises
two values for priority:
	- RTE_THREAD_PRIORITY_NORMAL
	- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority      - sets the priority of a thread
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
                               with a new value for priority

The user can choose thread priority through an EAL parameter,
when starting an application.  If EAL parameter is not used,
the per-platform default value for thread priority is used.
Otherwise administrator has an option to set one of available options:
 --thread-prio normal
 --thread-prio realtime

Example:
./dpdk-l2fwd -l 0-3 -n 4 --thread-prio normal -- -q 8 -p ffff

*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
                                   rte_thread_attr_t object
rte_thread_set/get_affinity      – sets/gets the affinity of a thread
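
A minimal usage sketch, assuming the signatures follow the description
above (the thread id type name, thread_func and thread_arg are
placeholders, and error handling is omitted):

rte_thread_attr_t attr;
rte_thread_t thread_id;
rte_cpuset_t cpuset;

rte_thread_attr_init(&attr);
rte_thread_attr_set_priority(&attr, RTE_THREAD_PRIORITY_NORMAL);
CPU_ZERO(&cpuset);
CPU_SET(1, &cpuset);
rte_thread_attr_set_affinity(&attr, &cpuset);
rte_thread_create(&thread_id, &attr, thread_func, thread_arg);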

**Errors**
A translation function that maps Windows error codes to errno-style
error codes is provided. 
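
An illustrative sketch of such a translation (only a few codes shown,
the function name is a placeholder):

static int
thread_translate_win32_error(DWORD error)
{
	switch (error) {
	case ERROR_SUCCESS:
		return 0;
	case ERROR_INVALID_PARAMETER:
		return EINVAL;
	case ERROR_NOT_ENOUGH_MEMORY:
		return ENOMEM;
	default:
		return EINVAL;
	}
}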

**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Add support for pthread_mutex_trylock
* Additional functionality offered by pthread_*
  (such as pthread_setname_np, etc.)

v12:
 - Fix freebsd warning about initializer in unit tests

v11:
 - Add unit tests for thread API
 - Rebase

v10:
 - Remove patch no. 10. It will be broken down in subpatches 
   and sent as a different patchset that depends on this one.
   This is done due to the ABI breaks that would be caused by patch 10.
 - Replace unix/rte_thread.c with common/rte_thread.c
 - Remove initializations that may prevent compiler from issuing useful
   warnings.
 - Remove rte_thread_types.h and rte_windows_thread_types.h
 - Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
 - Remove functions that retrieves thread handle from process handle
 - Remove rte_thread_cancel() until same behavior is obtained on
   all platforms.
 - Fix rte_thread_detach() function description,
   return value and remove empty line.
 - Reimplement mutex functions. Add compatible representation for mutex
   identifier. Add macro to replace static mutex initialization instances.
 - Fix commit messages (lines too long, remove unicode symbols)

v9:
- Sign patches

v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value

v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.

v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()

v5:
- update cover letter with more details on the priority argument

v4:
- fix function description
- rebase

v3:
- rebase

v2:
- revert changes that break ABI 
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c


Narcisa Vasile (10):
  eal: add basic threading functions
  eal: add thread attributes
  eal/windows: translate Windows errors to errno-style errors
  eal: implement functions for thread affinity management
  eal: implement thread priority management functions
  eal: add thread lifetime management
  eal: implement functions for mutex management
  eal: implement functions for thread barrier management
  eal: add EAL argument for setting thread priority
  Add unit tests for thread API

 app/test/meson.build                |   2 +
 app/test/test_threads.c             | 419 ++++++++++++++++++++
 lib/eal/common/eal_common_options.c |  28 +-
 lib/eal/common/eal_internal_cfg.h   |   2 +
 lib/eal/common/eal_options.h        |   2 +
 lib/eal/common/meson.build          |   1 +
 lib/eal/common/rte_thread.c         | 445 +++++++++++++++++++++
 lib/eal/include/rte_thread.h        | 406 ++++++++++++++++++-
 lib/eal/unix/meson.build            |   1 -
 lib/eal/unix/rte_thread.c           |  92 -----
 lib/eal/version.map                 |  20 +
 lib/eal/windows/eal_lcore.c         | 176 ++++++---
 lib/eal/windows/eal_windows.h       |  10 +
 lib/eal/windows/include/sched.h     |   2 +-
 lib/eal/windows/rte_thread.c        | 588 ++++++++++++++++++++++++++--
 15 files changed, 2020 insertions(+), 174 deletions(-)
 create mode 100644 app/test/test_threads.c
 create mode 100644 lib/eal/common/rte_thread.c
 delete mode 100644 lib/eal/unix/rte_thread.c

-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [EXT] Re: [PATCH v4] doc: announce API changes for Windows compatibility
  2021-08-02 13:48  0%           ` Akhil Goyal
  2021-08-02 14:57  0%             ` Tal Shnaiderman
@ 2021-08-02 17:46  0%             ` Thomas Monjalon
  1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-08-02 17:46 UTC (permalink / raw)
  To: Dmitry Kozlyuk
  Cc: dev, Ferruh Yigit, Fiona Trahe, Khoa To, Ray Kinsella,
	andrew.rybchenko, olivier.matz, navasile, pallavi.kadam,
	ranjit.menon, bruce.richardson, stephen, Akhil Goyal

02/08/2021 15:48, Akhil Goyal:
> > 2021-08-02 12:45 (UTC+0000), Akhil Goyal:
> > > > 21/07/2021 21:55, Dmitry Kozlyuk:
> > > > > Windows headers define `s_addr`, `min`, and `max` as macros.
> > > > > If DPDK headers are included after Windows ones, DPDK structure
> > > > > definitions containing fields with these names get broken (example 1),
> > > > > as well as any usage of such fields (example 2). If DPDK headers
> > > > > undefined these macros, it could break consumer code (example 3).
> > > > > It is proposed to rename structure fields in DPDK, because Win32
> > headers
> > > > > are used more widely than DPDK, as a general-purpose platform
> > compared
> > > > > to domain-specific kit, and are harder to fix because of that.
> > > > > Exact new names are left for further discussion.
> > > > >
> > > > > Example 1:
> > > > >
> > > > >     /* in DPDK public header included after windows.h */
> > > > >     struct rte_type {
> > > > >         int min;    /* ERROR: `min` is a macro */
> > > > >     };
> > > > >
> > > > > Example 2:
> > > > >
> > > > >     #include <rte_ether.h>
> > > > >     #include <winsock2.h>
> > > > >     struct rte_ether_hdr eh;
> > > > >     eh.s_addr.addr_bytes[0] = 0;    /* ERROR: `s_addr` is a macro */
> > > > >
> > > > > Example 3:
> > > > >
> > > > >     #include <winsock2.h>
> > > > >     #include <rte_ether.h>
> > > > >     struct in_addr addr;
> > > > >     addr.s_addr = 0;      /* ERROR: there is no `s_addr` field,
> > > > >                              and `s_addr` macro is undefined by DPDK. */
> > > > >
> > > > > Commit 6c068dbd9fea ("net: work around s_addr macro on Windows")
> > > > > modified definition of `struct rte_ether_hdr` to avoid the issue.
> > > > > However, the workaround assumes `#define s_addr S_un.S_addr`
> > > > > in Windows headers, which is not a part of official API.
> > > > > It also complicates the definition of `struct rte_ether_hdr`.
> > > > >
> > > > > Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
> > > > > Acked-by: Khoa To <khot@microsoft.com>
[...]
> > > > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > > >
> > > Can we have a local variable named as min/max?
> > > If not, then I believe it is not a good idea.
> > 
> > Yes, except for inline functions in public headers.
> > The only problematic one I know of is this (rte_lru_x86.h):
> > 
> > static inline int
> > f_lru_pos(uint64_t lru_list)
> > {
> > 	__m128i lst = _mm_set_epi64x((uint64_t)-1, lru_list);
> > 	__m128i min = _mm_minpos_epu16(lst); /* <<< */
> > 	return _mm_extract_epi16(min, 1);
> > }
> > 
> > Fixing it breaks neither API nor ABI, thus no explicit deprecation notice.
> OK,
> Acked-by: Akhil Goyal <gakhil@marvell.com>

Applied, thanks.




^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce: make rte intr handle internal
  2021-08-02 16:03 10% [dpdk-dev] [PATCH] doc: announce: make rte intr handle internal Harman Kalra
@ 2021-08-02 19:20  0% ` Andrew Rybchenko
  2021-08-03  2:37  0% ` Xia, Chenbo
  1 sibling, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-08-02 19:20 UTC (permalink / raw)
  To: Harman Kalra, jerinj, david.marchand, thomas, Ray Kinsella; +Cc: dev

On 8/2/21 7:03 PM, Harman Kalra wrote:
> Moving struct rte_intr_handle as an internal structure to
> avoid any ABI breakages in future. Since this structure defines
> some static arrays and changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size on probe time. Either way its an ABI breakage.
> 
> Discussion thread:
> https://mails.dpdk.org/archives/dev/2021-March/202959.html
> 
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
> 
> Signed-off-by: Harman Kalra <hkalra@marvell.com>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v2] devtools: test different build types
    @ 2021-08-02 22:45 23% ` Thomas Monjalon
    2 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-08-02 22:45 UTC (permalink / raw)
  To: dev; +Cc: Andrew Rybchenko, Bruce Richardson

All builds were of type debugoptimized.
This build type is now kept only for builds having an ABI check.
Others will have the default build type (release),
except if specified differently, as in the x86 generic build,
which will test the non-optimized debug build type.
Some static builds will test the minsize build type.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
v2: fix init of var buildtype
---
 devtools/test-meson-builds.sh | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 9ec8e2bc7e..7bd305a669 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -92,13 +92,16 @@ load_env () # <target compiler>
 	command -v $targetcc >/dev/null 2>&1 || return 1
 }
 
-config () # <dir> <builddir> <meson options>
+config () # <dir> <builddir> <ABI check> <meson options>
 {
 	dir=$1
 	shift
 	builddir=$1
 	shift
+	abicheck=$1
+	shift
 	if [ -f "$builddir/build.ninja" ] ; then
+		[ $abicheck = ABI ] || return 0
 		# for existing environments, switch to debugoptimized if unset
 		# so that ABI checks can run
 		if ! $MESON configure $builddir |
@@ -114,7 +117,9 @@ config () # <dir> <builddir> <meson options>
 	else
 		options="$options -Dexamples=l3fwd" # save disk space
 	fi
-	options="$options --buildtype=debugoptimized"
+	if [ $abicheck = ABI ] ; then
+		options="$options --buildtype=debugoptimized"
+	fi
 	for option in $DPDK_MESON_OPTIONS ; do
 		options="$options -D$option"
 	done
@@ -165,7 +170,7 @@ build () # <directory> <target cc | cross file> <ABI check> [meson options]
 		cross=
 	fi
 	load_env $targetcc || return 0
-	config $srcdir $builds_dir/$targetdir $cross --werror $*
+	config $srcdir $builds_dir/$targetdir $abicheck $cross --werror $*
 	compile $builds_dir/$targetdir
 	if [ -n "$DPDK_ABI_REF_VERSION" -a "$abicheck" = ABI ] ; then
 		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
@@ -179,7 +184,7 @@ build () # <directory> <target cc | cross file> <ABI check> [meson options]
 			fi
 
 			rm -rf $abirefdir/build
-			config $abirefdir/src $abirefdir/build $cross \
+			config $abirefdir/src $abirefdir/build $abicheck $cross \
 				-Dexamples= $*
 			compile $abirefdir/build
 			install_target $abirefdir/build $abirefdir/$targetdir
@@ -211,11 +216,13 @@ for c in gcc clang ; do
 	for s in static shared ; do
 		if [ $s = shared ] ; then
 			abicheck=ABI
+			buildtype=
 		else
 			abicheck=skipABI # save time and disk space
+			buildtype='--buildtype=minsize'
 		fi
 		export CC="$CCACHE $c"
-		build build-$c-$s $c $abicheck --default-library=$s
+		build build-$c-$s $c $abicheck $buildtype --default-library=$s
 		unset CC
 	done
 done
@@ -227,7 +234,7 @@ generic_isa='nehalem'
 if ! check_cc_flags "-march=$generic_isa" ; then
 	generic_isa='corei7'
 fi
-build build-x86-generic cc skipABI -Dcheck_includes=true \
+build build-x86-generic cc skipABI --buildtype=debug -Dcheck_includes=true \
 	-Dlibdir=lib -Dcpu_instruction_set=$generic_isa $use_shared
 
 # 32-bit with default compiler
-- 
2.31.1


^ permalink raw reply	[relevance 23%]

* Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver
  2021-07-31 20:44  0%         ` Thomas Monjalon
@ 2021-08-03  1:52  0%           ` Xia, Chenbo
  2021-08-03  8:19  0%             ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Xia, Chenbo @ 2021-08-03  1:52 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, Yigit, Ferruh, dev, mdr, david.marchand, Richardson, Bruce,
	andrew.rybchenko, Ananyev, Konstantin

Hi Thomas,

> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Sunday, August 1, 2021 4:44 AM
> To: Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Yigit, Ferruh <ferruh.yigit@intel.com>; dev@dpdk.org;
> mdr@ashroe.eu; david.marchand@redhat.com; Richardson, Bruce
> <bruce.richardson@intel.com>; andrew.rybchenko@oktetlabs.ru; Ananyev,
> Konstantin <konstantin.ananyev@intel.com>
> Subject: Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus
> driver
> 
> 27/07/2021 10:44, Bruce Richardson:
> > On Mon, Jul 26, 2021 at 05:56:17AM +0000, Xia, Chenbo wrote:
> > > From: Yigit, Ferruh <ferruh.yigit@intel.com>
> > > > On 7/23/2021 8:39 AM, Xia, Chenbo wrote:
> > > > > From: dev <dev-bounces@dpdk.org> On Behalf Of Chenbo Xia
> > > > >> +* pci: To reduce unnecessary ABIs exposed by DPDK bus driver,
> > > > "rte_bus_pci.h"
> > > > >> +  will be made internal in 21.11 and macros/data
> structures/functions
> > > > defined
> > > > >> +  in the header will not be considered as ABI anymore. This change
> is
> > > > >> inspired
> > > > >> +  by the RFC
> > > > https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
> > > > >
> > > > > I see there's some ABI improvement work on-going and I think it could
> be
> > > > part of
> > > > > the work. If it makes sense to you, I'd like some ACKs.
> > > > >
> > > >
> > > > Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > > >
> > > > I am for reducing the public ABI as much as possible. How big will the
> > > > change
> > > > be? Is the 'rte_bus_pci.h' used other than './drivers/bus/pci/'?
> > >
> > > I don't see big change here. And I am not sure if I understand your second
> > > question. The rte_bus_pci.h will still be used by drivers (maybe remove
> the
> > > rte prefix and change the file name).
> > >
> > The file itself will still be exported in some cases, where the end-user
> > has their own drivers which need to be compiled, so I'd recommend keeping
> > the rte_ prefix. However, I think making all bus APIs internal-only to DPDK
> > is a good idea.
> 
> I don't understand how it can be exported _and_ internal.

I think we can use the meson option 'enable_driver_sdk'. The first use case is in
lib ethdev for exporting internal APIs for out-of-tree drivers. For the PCI bus, I
think the use case is similar: users who want to build out-of-tree drivers can
set the option to true to export the PCI header, but the structs/functions are marked
internal. Does that make sense to you?
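
For example, an out-of-tree driver build would then opt in explicitly
(assuming the existing 'enable_driver_sdk' meson option keeps its
current meaning):

	meson setup build -Denable_driver_sdk=true
	ninja -C build install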

Thanks,
Chenbo

> And about the rte_ prefix, it should be kept even if it used only
> in internal drivers because it prevent from namespace clash with other
> libraries included by the drivers.
> As a rule we should always have rte_ prefix for each symbol used outside
> of its own library.
> 
> That said I am OK with the direction of hiding PCI bus API.
> 
> Applied, thanks.
> 
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce security API changes for Inline IPsec
  2021-07-30 22:16  3% ` Thomas Monjalon
@ 2021-08-03  2:11  3%   ` Nithin Dabilpuram
  0 siblings, 0 replies; 200+ results
From: Nithin Dabilpuram @ 2021-08-03  2:11 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: konstantin.ananyev, jerinj, gakhil, roy.fan.zhang,
	hemant.agrawal, matan, dev, ferruh.yigit, bruce.richardson, mdr,
	david.marchand

On Sat, Jul 31, 2021 at 12:16:12AM +0200, Thomas Monjalon wrote:
> 27/07/2021 19:36, Nithin Dabilpuram:
> > Announce changes to make rte_security_set_pkt_metadata() and
> > rte_security_get_userdata() inline instead of C functions and
> > also addition of another field in structure rte_security_ctx for
> > holding flags.
> 
> I guess there is a performance reason but the motivation
> is not explained. Also it is going in the opposite direction
> of what is discussed in the Technical Board meetings:
> we should avoid and reduce the number of inline functions
> to reduce the ABI surface.

Yes, it is a performance improvement. It is discussed in detail in
https://inbox.dpdk.org/dev/20210624102848.3878788-1-gakhil@marvell.com/T/#mc4ba3500c024f9911b7af7e5a6e95e23f6197fdd

To summarize, initially the two per-pkt fast path APIs rte_security_set_pkt_metadata()
and rte_security_get_userdata() were added in anticipation that PMDs would
have a lot of per-pkt processing to do for security offload packets,
unlike other ethdev Rx/Tx offloads.

Now that we have a few PMDs that implement inline IPsec support, it looks more
beneficial for performance to have PMD-specific logic in tx_burst()/rx_burst()
instead of doing a per-pkt function ptr jump to do the same in
rte_security_set_pkt_metadata() or rte_security_get_userdata().
In our PMD, rte_security_set_pkt_metadata() currently just copies the private SA ptr
from rte_security_session to the security mbuf dynamic field, and
rte_security_get_userdata() just copies the userdata ptr from the mbuf dynamic field.

Hence the above proposal provides an alternative for PMDs which want to avoid the
function ptr jump, by doing a simple metadata get/set on the mbuf security dynamic
field alongside the existing function ptr path.
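
A rough sketch of that alternative (the flag name is a placeholder, the
flags field is the one proposed above, and rte_security_dynfield() is
assumed to be the existing dynamic field accessor):

static inline int
rte_security_set_pkt_metadata(struct rte_security_ctx *instance,
			      struct rte_security_session *sess,
			      struct rte_mbuf *m, void *params)
{
	if (instance->flags & RTE_SEC_CTX_F_FAST_MDATA) {
		/* Fast path: store the session private data directly in
		 * the mbuf security dynamic field, no per-pkt callback. */
		*rte_security_dynfield(m) = (rte_security_dynfield_t)
				(uintptr_t)sess->sess_private_data;
		return 0;
	}
	/* Fallback: existing function pointer path. */
	return instance->ops->set_pkt_metadata(instance->device,
					       sess, m, params);
}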

Also, in future, when there are no PMDs that need the function ptr support
for the same operations, this new method can be made the only method and the rest
of the function pointer jump logic can be removed, probably without breaking the ABI.

> 
> 

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] doc: announce: make rte intr handle internal
  2021-08-02 16:03 10% [dpdk-dev] [PATCH] doc: announce: make rte intr handle internal Harman Kalra
  2021-08-02 19:20  0% ` Andrew Rybchenko
@ 2021-08-03  2:37  0% ` Xia, Chenbo
  2021-08-03  4:05  0%   ` Jerin Jacob
  1 sibling, 1 reply; 200+ results
From: Xia, Chenbo @ 2021-08-03  2:37 UTC (permalink / raw)
  To: Harman Kalra, jerinj, david.marchand, thomas, Ray Kinsella; +Cc: dev

> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> Sent: Tuesday, August 3, 2021 12:04 AM
> To: jerinj@marvell.com; david.marchand@redhat.com; thomas@monjalon.net; Ray
> Kinsella <mdr@ashroe.eu>
> Cc: dev@dpdk.org; Harman Kalra <hkalra@marvell.com>
> Subject: [dpdk-dev] [PATCH] doc: announce: make rte intr handle internal
> 
> Moving struct rte_intr_handle as an internal structure to
> avoid any ABI breakages in future, since this structure defines
> some static arrays and changing respective macros breaks the ABI.
> Eg:
> Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> MSI-X interrupts that can be defined for a PCI device, while PCI
> specification allows maximum 2048 MSI-X interrupts that can be used.
> If some PCI device requires more than 512 vectors, either change the
> RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> PCI device MSI-X size on probe time. Either way it's an ABI breakage.
> 
> Discussion thread:
> https://mails.dpdk.org/archives/dev/2021-March/202959.html
> 
> Change already included in 21.11 ABI improvement spreadsheet (item 42):
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9U
> xeyfE/edit#gid=0
> 
> Signed-off-by: Harman Kalra <hkalra@marvell.com>
> ---
>  doc/guides/rel_notes/deprecation.rst | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/doc/guides/rel_notes/deprecation.rst
> b/doc/guides/rel_notes/deprecation.rst
> index d9c0e65921..e95574b1ec 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -17,6 +17,9 @@ Deprecation Notices
>  * eal: The function ``rte_eal_remote_launch`` will return new error codes
>    after read or write error on the pipe, instead of calling ``rte_panic``.
> 
> +* eal: Making ``struct rte_intr_handle`` internal to avoid any ABI breakages
> +  in future.
> +
>  * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
>    not allow for writing optimized code for all the CPU architectures
> supported
>    in DPDK. DPDK has adopted the atomic operations from
> --
> 2.18.0

Acked-by: Chenbo Xia <chenbo.xia@intel.com>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce: make rte intr handle internal
  2021-08-03  2:37  0% ` Xia, Chenbo
@ 2021-08-03  4:05  0%   ` Jerin Jacob
  2021-08-04 14:22  0%     ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-08-03  4:05 UTC (permalink / raw)
  To: Xia, Chenbo
  Cc: Harman Kalra, jerinj, david.marchand, thomas, Ray Kinsella, dev

On Tue, Aug 3, 2021 at 8:07 AM Xia, Chenbo <chenbo.xia@intel.com> wrote:
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Harman Kalra
> > Sent: Tuesday, August 3, 2021 12:04 AM
> > To: jerinj@marvell.com; david.marchand@redhat.com; thomas@monjalon.net; Ray
> > Kinsella <mdr@ashroe.eu>
> > Cc: dev@dpdk.org; Harman Kalra <hkalra@marvell.com>
> > Subject: [dpdk-dev] [PATCH] doc: announce: make rte intr handle internal
> >
> > Moving struct rte_intr_handle as an internal structure to
> > avoid any ABI breakages in future, since this structure defines
> > some static arrays and changing respective macros breaks the ABI.
> > Eg:
> > Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> > MSI-X interrupts that can be defined for a PCI device, while PCI
> > specification allows maximum 2048 MSI-X interrupts that can be used.
> > If some PCI device requires more than 512 vectors, either change the
> > RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> > PCI device MSI-X size on probe time. Either way it's an ABI breakage.
> >
> > Discussion thread:
> > https://mails.dpdk.org/archives/dev/2021-March/202959.html
> >
> > Change already included in 21.11 ABI improvement spreadsheet (item 42):
> > https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9U
> > xeyfE/edit#gid=0
> >
> > Signed-off-by: Harman Kalra <hkalra@marvell.com>
> > ---
> >  doc/guides/rel_notes/deprecation.rst | 3 +++
> >  1 file changed, 3 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> > b/doc/guides/rel_notes/deprecation.rst
> > index d9c0e65921..e95574b1ec 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -17,6 +17,9 @@ Deprecation Notices
> >  * eal: The function ``rte_eal_remote_launch`` will return new error codes
> >    after read or write error on the pipe, instead of calling ``rte_panic``.
> >
> > +* eal: Making ``struct rte_intr_handle`` internal to avoid any ABI breakages
> > +  in future.
> > +
> >  * rte_atomicNN_xxx: These APIs do not take memory order parameter. This does
> >    not allow for writing optimized code for all the CPU architectures
> > supported
> >    in DPDK. DPDK has adopted the atomic operations from
> > --
> > 2.18.0
>
> Acked-by: Chenbo Xia <chenbo.xia@intel.com>

Acked-by: Jerin Jacob <jerinj@marvell.com>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2] doc: announce changes to eventdev library
  @ 2021-08-03  4:12  3%   ` Jerin Jacob
  2021-08-03  8:32  0%     ` Mattias Rönnblom
                       ` (3 more replies)
  0 siblings, 4 replies; 200+ results
From: Jerin Jacob @ 2021-08-03  4:12 UTC (permalink / raw)
  To: Pavan Nikhilesh, Gujjar, Abhinandan S, Erik Gabriel Carrillo,
	Van Haaren, Harry, Hemant Agrawal, McDaniel, Timothy, Liang Ma,
	Jayatheerthan, Jay
  Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, Mattias Rönnblom,
	Thomas Monjalon

On Tue, Aug 3, 2021 at 2:46 AM <pbhagavatula@marvell.com> wrote:
>
> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>
> Make driver layer as internal, remove unnecessary rte_ prefix for
> structures and functions that are not a part of public API.
> Promote experimental trace and vector APIs to stable.
> Add reserved field to `rte_event_timer` structure.
>
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

Acked-by: Jerin Jacob <jerinj@marvell.com>


++ Eventdev driver Maintainers.

This list is based on items identified for 21.11 ABI improvement at
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0


> ---
>  v2 Changes:
>  - Fix build issues.
>
>  doc/guides/rel_notes/deprecation.rst | 11 +++++++++++
>  1 file changed, 11 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index d9c0e65921..6ac321eb1e 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -158,3 +158,14 @@ Deprecation Notices
>  * security: The functions ``rte_security_set_pkt_metadata`` and
>    ``rte_security_get_userdata`` will be made inline functions and additional
>    flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
> +
> +* eventdev: The file ``rte_eventdev_pmd.h`` will be renamed to ``eventdev_driver.h``
> +  to make the driver interface as internal and the structures ``rte_eventdev_data``,
> +  ``rte_eventdev`` and ``rte_eventdevs`` will be moved to a new file named
> +  ``rte_eventdev_core.h`` in DPDK 21.11.
> +  The ``rte_`` prefix for internal structures and functions will be removed across the
> +  library.
> +  The experimental eventdev trace APIs and ``rte_event_vector_pool_create``,
> +  ``rte_event_eth_rx_adapter_vector_limits_get`` will be promoted to stable.
> +  An 8byte reserved field will be added to the structure ``rte_event_timer`` to
> +  support future extensions.
> --
> 2.17.1
>

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver
  2021-08-03  1:52  0%           ` Xia, Chenbo
@ 2021-08-03  8:19  0%             ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-08-03  8:19 UTC (permalink / raw)
  To: Xia, Chenbo
  Cc: dev, Yigit, Ferruh, dev, mdr, david.marchand, Richardson, Bruce,
	andrew.rybchenko, Ananyev, Konstantin

03/08/2021 03:52, Xia, Chenbo:
> Hi Thomas,
> 
> From: Thomas Monjalon <thomas@monjalon.net>
> > 27/07/2021 10:44, Bruce Richardson:
> > > On Mon, Jul 26, 2021 at 05:56:17AM +0000, Xia, Chenbo wrote:
> > > > From: Yigit, Ferruh <ferruh.yigit@intel.com>
> > > > > On 7/23/2021 8:39 AM, Xia, Chenbo wrote:
> > > > > > From: dev <dev-bounces@dpdk.org> On Behalf Of Chenbo Xia
> > > > > >> +* pci: To reduce unnecessary ABIs exposed by DPDK bus driver,
> > > > > "rte_bus_pci.h"
> > > > > >> +  will be made internal in 21.11 and macros/data
> > structures/functions
> > > > > defined
> > > > > >> +  in the header will not be considered as ABI anymore. This change
> > is
> > > > > >> inspired
> > > > > >> +  by the RFC
> > > > > https://patchwork.dpdk.org/project/dpdk/list/?series=17176.
> > > > > >
> > > > > > I see there's some ABI improvement work on-going and I think it could
> > be
> > > > > part of
> > > > > > the work. If it makes sense to you, I'd like some ACKs.
> > > > > >
> > > > >
> > > > > Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > > > >
> > > > > I am for reducing the public ABI as much as possible. How big will the
> > > > > change
> > > > > be? Is the 'rte_bus_pci.h' used other than './drivers/bus/pci/'?
> > > >
> > > > I don't see big change here. And I am not sure if I understand your second
> > > > question. The rte_bus_pci.h will still be used by drivers (maybe remove
> > the
> > > > rte prefix and change the file name).
> > > >
> > > The file itself will still be exported in some cases, where the end-user
> > > has their own drivers which need to be compiled, so I'd recommend keeping
> > > the rte_ prefix. However, I think making all bus APIs internal-only to DPDK
> > > is a good idea.
> > 
> > I don't understand how it can be exported _and_ internal.
> 
> I think we can use the meson option 'enable_driver_sdk'. The first use case is in
> lib ethdev for exporting internal APIs for out-of-tree drivers. For the PCI bus, I
> think the use case is similar: users who want to build out-of-tree drivers can
> set the option to true to export the PCI header, but the structs/functions are marked
> internal. Does that make sense to you?

I understand the intent.
You are saying an out-of-tree driver is considered internal.
Let's see how it works for real.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2] doc: announce changes to eventdev library
  2021-08-03  4:12  3%   ` Jerin Jacob
@ 2021-08-03  8:32  0%     ` Mattias Rönnblom
  2021-08-04  5:57  0%     ` Jayatheerthan, Jay
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 200+ results
From: Mattias Rönnblom @ 2021-08-03  8:32 UTC (permalink / raw)
  To: Jerin Jacob, Pavan Nikhilesh, Gujjar, Abhinandan S,
	Erik Gabriel Carrillo, Van Haaren, Harry, Hemant Agrawal,
	McDaniel, Timothy, Liang Ma, Jayatheerthan, Jay
  Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, Thomas Monjalon

On 2021-08-03 06:12, Jerin Jacob wrote:
> On Tue, Aug 3, 2021 at 2:46 AM <pbhagavatula@marvell.com> wrote:
>> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
>>
>> Make driver layer as internal, remove unnecessary rte_ prefix for
>> structures and functions that are not a part of public API.
>> Promote experimental trace and vector APIs to stable.
>> Add reserved field to `rte_event_timer` structure.
>>
>> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
>
>
> ++ Eventdev driver Maintainers.
>
> This list is based on items identified for 21.11 ABI improvement at
> https://protect2.fireeye.com/v1/url?k=bb3a87ff-e4a1bf2d-bb3ac764-866132fe445e-d427d33ed389149e&q=1&e=db41f48a-6628-48aa-93d1-3190b8a53257&u=https%3A%2F%2Fdocs.google.com%2Fspreadsheets%2Fd%2F1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE%2Fedit%23gid%3D0
>

Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>


>> ---
>>   v2 Changes:
>>   - Fix build issues.
>>
>>   doc/guides/rel_notes/deprecation.rst | 11 +++++++++++
>>   1 file changed, 11 insertions(+)
>>
>> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
>> index d9c0e65921..6ac321eb1e 100644
>> --- a/doc/guides/rel_notes/deprecation.rst
>> +++ b/doc/guides/rel_notes/deprecation.rst
>> @@ -158,3 +158,14 @@ Deprecation Notices
>>   * security: The functions ``rte_security_set_pkt_metadata`` and
>>     ``rte_security_get_userdata`` will be made inline functions and additional
>>     flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
>> +
>> +* eventdev: The file ``rte_eventdev_pmd.h`` will be renamed to ``eventdev_driver.h``
>> +  to make the driver interface as internal and the structures ``rte_eventdev_data``,
>> +  ``rte_eventdev`` and ``rte_eventdevs`` will be moved to a new file named
>> +  ``rte_eventdev_core.h`` in DPDK 21.11.
>> +  The ``rte_`` prefix for internal structures and functions will be removed across the
>> +  library.
>> +  The experimental eventdev trace APIs and ``rte_event_vector_pool_create``,
>> +  ``rte_event_eth_rx_adapter_vector_limits_get`` will be promoted to stable.
>> +  An 8byte reserved field will be added to the structure ``rte_event_timer`` to
>> +  support future extensions.
>> --
>> 2.17.1
>>


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] doc: announce cryptodev-PMD interface as internal
@ 2021-08-03 11:44  3% Akhil Goyal
  2021-08-03 19:25  0% ` Ajit Khaparde
  0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-08-03 11:44 UTC (permalink / raw)
  To: dev
  Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
	konstantin.ananyev, thomas, roy.fan.zhang, asomalap,
	ruifeng.wang, ajit.khaparde, pablo.de.lara.guarch, fiona.trahe,
	adwivedi, michaelsh, rnagadheeraj, jianjay.zhou, Akhil Goyal

The APIs which are internal to the PMDs and the cryptodev library
can be marked as internal so that ABI checking does not
flag changes in APIs which are internal to DPDK.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 doc/guides/rel_notes/deprecation.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 6a35c7649a..f81bd87f10 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -148,6 +148,9 @@ Deprecation Notices
   content. On Linux and FreeBSD, supported prior to DPDK 20.11,
   original structure will be kept until DPDK 21.11.
 
+* cryptodev: The APIs for interfacing between library and PMD will be marked
+  as internal APIs in DPDK 21.11.
+
 * security: The functions ``rte_security_set_pkt_metadata`` and
   ``rte_security_get_userdata`` will be made inline functions and additional
   flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v3] doc: policy on the promotion of experimental APIs
  @ 2021-08-03 14:12  3%         ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-08-03 14:12 UTC (permalink / raw)
  To: Jerin Jacob, Tyler Retzlaff
  Cc: dpdk-dev, Richardson, Bruce, John McNamara, Ferruh Yigit,
	Thomas Monjalon, David Marchand, Stephen Hemminger



On 11/07/2021 08:22, Jerin Jacob wrote:
> On Sat, Jul 10, 2021 at 12:46 AM Tyler Retzlaff
> <roretzla@linux.microsoft.com> wrote:
>>
>> On Fri, Jul 09, 2021 at 11:46:54AM +0530, Jerin Jacob wrote:
>>>> +
>>>> +Promotion to stable
>>>> +~~~~~~~~~~~~~~~~~~~
>>>> +
>>>> +Ordinarily APIs marked as ``experimental`` will be promoted to the stable ABI
>>>> +once a maintainer and/or the original contributor is satisfied that the API is
>>>> +reasonably mature. In exceptional circumstances, should an API still be
>>>
>>> Is this line with git commit message?
>>> Why making an exceptional case? why not make it stable after two years
>>> or remove it.
>>> My worry is if we make an exception case, it will be difficult to
>>> enumerate the exception case.
>>
>> i think the intent here is to indicate that an api/abi doesn't just
>> automatically become stable after a period of time.  there also has to
>> be an evaluation by the maintainer / community before making it stable.
>>
>> so i guess the timer is something that should force that evaluation. as
>> a part of that evaluation one would imagine there is justification for
>> keeping the api as experimental for longer and if so a rationale as to
>> why.
> 
> I think, we need to have a deadline. Probably one year timer for evaluation and
> two year for max time for decision to make it as stable or remove.
> 

Tyler is correct here (sorry for the delay I was out on vacation). 
In my usage of the word exception - I was conveying that an API aging or timing out should be an exceptional event.
What I am hoping will happen in the 90%-ile of cases is conveyed in the previous line. 

"Ordinarily APIs marked as ``experimental`` will be promoted to the stable ABI
once a maintainer and/or the original contributor is satisfied that the API is
reasonably mature."

i.e. that the symbol has been pro-actively managed, with the maintainer and original author deciding when to promote.

I will add a line to indicate that experimental APIs should be reviewed after one year.

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v4] doc: policy on the promotion of experimental APIs
    @ 2021-08-03 16:44 23% ` Ray Kinsella
  2021-08-04  9:34 23% ` [dpdk-dev] [PATCH v5] " Ray Kinsella
  2 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-08-03 16:44 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, john.mcnamara, roretzla, ferruh.yigit, thomas,
	david.marchand, stephen, jerinjacobk, Ray Kinsella

Clarifying the ABI policy on the promotion of experimental APIs to stable.
We have a fair number of APIs that have been experimental for more than
2 years. This policy amendment indicates that these APIs should be
promoted or removed, or should at least prompt a conversation between the
maintainer and the original contributor.

Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
v2: addressing comments on abi expiry from Tyler Retzlaff.
v3: addressing typos in the git commit message
v4: addressing typos and comments by Jerin Jacob

 doc/guides/contributing/abi_policy.rst | 25 ++++++++++++++++++++++---
 1 file changed, 22 insertions(+), 3 deletions(-)

diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
index 4ad87dbfed..1acd12cbf4 100644
--- a/doc/guides/contributing/abi_policy.rst
+++ b/doc/guides/contributing/abi_policy.rst
@@ -26,9 +26,10 @@ General Guidelines
    symbols is managed with :ref:`ABI Versioning <abi_versioning>`.
 #. The removal of symbols is considered an :ref:`ABI breakage <abi_breakages>`,
    once approved these will form part of the next ABI version.
-#. Libraries or APIs marked as :ref:`experimental <experimental_apis>` may
-   be changed or removed without prior notice, as they are not considered part
-   of an ABI version.
+#. Libraries or APIs marked as :ref:`experimental <experimental_apis>` may be
+   changed or removed without prior notice, as they are not considered part of
+   an ABI version. The :ref:`experimental <experimental_apis>` status of an API
+   is not an indefinite state.
 #. Updates to the :ref:`minimum hardware requirements <hw_rqmts>`, which drop
    support for hardware which was previously supported, should be treated as an
    ABI change.
@@ -358,3 +359,21 @@ Libraries
 Libraries marked as ``experimental`` are entirely not considered part of an ABI
 version.
 All functions in such libraries may be changed or removed without prior notice.
+
+Promotion to stable
+~~~~~~~~~~~~~~~~~~~
+
+An API's ``experimental`` status should be reviewed annually by the
+maintainer and/or the original contributor. Ordinarily APIs marked as
+``experimental`` will be promoted to the stable ABI once a maintainer has become
+satisfied that the API is mature and is unlikely to change.
+
+In exceptional circumstances, should an API still be classified as
+``experimental`` after two years without any prospect of becoming part of
+the stable API, it will become a candidate for removal, to avoid the
+accumulation of abandoned symbols.
+
+Should an API's Binary Interface change, usually due to a direct change to the
+API's signature, it is reasonable for the review and expiry clocks to reset. The
+promotion or removal of symbols will typically form part of a conversation
+between the maintainer and the original contributor.
-- 
2.26.2


^ permalink raw reply	[relevance 23%]

* [dpdk-dev] [PATCH v13 00/10] eal: Add EAL API for threading
  2021-08-02 17:32  3%   ` [dpdk-dev] [PATCH v12 " Narcisa Ana Maria Vasile
@ 2021-08-03 19:01  3%     ` Narcisa Ana Maria Vasile
  2021-08-19 21:31  3%       ` [dpdk-dev] [PATCH v14 0/9] " Narcisa Ana Maria Vasile
  0 siblings, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-08-03 19:01 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

EAL thread API

**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model,
so it currently
relies on a header file that hides the Windows calls under
pthread matched interfaces. Given that EAL should isolate the environment
specifics from the applications and libraries and mediate
all the communication with the operating systems, a new EAL interface
is needed for thread management.

**Goals**
* Introduce a generic EAL API for threading support that will remove
  the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
  RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
  3rd party thread library through a configuration option.

**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)

**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();

lib/librte_eal/common/rte_thread.c
int rte_thread_create() 
{
	return pthread_create();
}

lib/librte_eal/windows/rte_thread.c
int rte_thread_create() 
{
	return CreateThread();
}
-----------------------------------------------------

**Thread attributes**

When or after a thread is created, specific characteristics of the thread
can be adjusted. Given that the thread characteristics that are of interest
for DPDK applications are affinity and priority, the following structure
that represents thread attributes has been defined:

typedef struct
{
	enum rte_thread_priority priority;
	rte_cpuset_t cpuset;
} rte_thread_attr_t;

The *rte_thread_create()* function can optionally receive
an rte_thread_attr_t
object that will cause the thread to be created with the
affinity and priority
described by the attributes object. If no rte_thread_attr_t is passed
(parameter is NULL), the default affinity and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.
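
As a rough usage sketch only (the thread id type and the signature of the
start routine below are assumptions for illustration, not the final API):

/* create a worker thread with default attributes */
static uint32_t worker(void *arg)     /* return/argument types assumed */
{
	(void)arg;
	/* ... do work ... */
	return 0;
}

rte_thread_attr_t attr;
rte_thread_t tid;                     /* thread id type assumed */

rte_thread_attr_init(&attr);          /* default priority and affinity */
rte_thread_create(&tid, &attr, worker, NULL);
/* passing NULL instead of &attr uses the same defaults */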

*Priority* is represented through an enum that currently advertises
two values for priority:
	- RTE_THREAD_PRIORITY_NORMAL
	- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority      - sets the priority of a thread
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
                               with a new value for priority

The user can choose the thread priority through an EAL parameter
when starting an application. If the EAL parameter is not used,
the per-platform default value for thread priority is applied.
Otherwise, the administrator can set one of the available options:
 --thread-prio normal
 --thread-prio realtime

Example:
./dpdk-l2fwd -l 0-3 -n 4 --thread-prio normal -- -q 8 -p ffff

*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
                                   rte_thread_attr_t object
rte_thread_set/get_affinity      – sets/gets the affinity of a thread
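
Putting attributes, priority and affinity together, a sketch of creating a
pinned thread (reusing tid/worker from the earlier sketch; exact signatures
are assumed, and CPU 2 is an arbitrary example):

rte_thread_attr_t attr;
rte_cpuset_t cpuset;

rte_thread_attr_init(&attr);
rte_thread_attr_set_priority(&attr, RTE_THREAD_PRIORITY_NORMAL);
CPU_ZERO(&cpuset);
CPU_SET(2, &cpuset);                  /* pin the new thread to CPU 2 */
rte_thread_attr_set_affinity(&attr, &cpuset);
rte_thread_create(&tid, &attr, worker, NULL);

/*
 * A running thread's affinity/priority can likewise be changed later
 * with rte_thread_set_affinity()/rte_thread_set_priority().
 */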

**Errors**
A translation function that maps Windows error codes to errno-style
error codes is provided. 
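
As an illustration of the kind of mapping such a helper performs (the
function name below is made up for this sketch and is not the symbol added
by the patches; requires <windows.h> and <errno.h>):

static int translate_win32_error(DWORD error)
{
	switch (error) {
	case ERROR_SUCCESS:
		return 0;
	case ERROR_INVALID_PARAMETER:
		return EINVAL;
	case ERROR_ACCESS_DENIED:
		return EACCES;
	case ERROR_NOT_ENOUGH_MEMORY:
		return ENOMEM;
	default:
		return EINVAL;   /* fallback chosen arbitrarily here */
	}
}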

**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for conditional variables
* Add support for pthread_mutex_trylock
* Additional functionality offered by pthread_*
  (such as pthread_setname_np, etc.)

v13:
 - Fix syntax error in unit tests

v12:
 - Fix freebsd warning about initializer in unit tests

v11:
 - Add unit tests for thread API
 - Rebase

v10:
 - Remove patch no. 10. It will be broken down into subpatches
   and sent as a different patchset that depends on this one.
   This is done due to the ABI breaks that would be caused by patch 10.
 - Replace unix/rte_thread.c with common/rte_thread.c
 - Remove initializations that may prevent compiler from issuing useful
   warnings.
 - Remove rte_thread_types.h and rte_windows_thread_types.h
 - Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
 - Remove functions that retrieves thread handle from process handle
 - Remove rte_thread_cancel() until same behavior is obtained on
   all platforms.
 - Fix rte_thread_detach() function description,
   return value and remove empty line.
 - Reimplement mutex functions. Add compatible representation for mutex
   identifier. Add macro to replace static mutex initialization instances.
 - Fix commit messages (lines too long, remove unicode symbols)

v9:
- Sign patches

v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value

v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.

v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()

v5:
- update cover letter with more details on the priority argument

v4:
- fix function description
- rebase

v3:
- rebase

v2:
- revert changes that break ABI 
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c


Narcisa Vasile (10):
  eal: add basic threading functions
  eal: add thread attributes
  eal/windows: translate Windows errors to errno-style errors
  eal: implement functions for thread affinity management
  eal: implement thread priority management functions
  eal: add thread lifetime management
  eal: implement functions for mutex management
  eal: implement functions for thread barrier management
  eal: add EAL argument for setting thread priority
  Add unit tests for thread API

 app/test/meson.build                |   2 +
 app/test/test_threads.c             | 419 ++++++++++++++++++++
 lib/eal/common/eal_common_options.c |  28 +-
 lib/eal/common/eal_internal_cfg.h   |   2 +
 lib/eal/common/eal_options.h        |   2 +
 lib/eal/common/meson.build          |   1 +
 lib/eal/common/rte_thread.c         | 445 +++++++++++++++++++++
 lib/eal/include/rte_thread.h        | 406 ++++++++++++++++++-
 lib/eal/unix/meson.build            |   1 -
 lib/eal/unix/rte_thread.c           |  92 -----
 lib/eal/version.map                 |  20 +
 lib/eal/windows/eal_lcore.c         | 176 ++++++---
 lib/eal/windows/eal_windows.h       |  10 +
 lib/eal/windows/include/sched.h     |   2 +-
 lib/eal/windows/rte_thread.c        | 588 ++++++++++++++++++++++++++--
 15 files changed, 2020 insertions(+), 174 deletions(-)
 create mode 100644 app/test/test_threads.c
 create mode 100644 lib/eal/common/rte_thread.c
 delete mode 100644 lib/eal/unix/rte_thread.c

-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] doc: announce cryptodev-PMD interface as internal
  2021-08-03 11:44  3% [dpdk-dev] [PATCH] doc: announce cryptodev-PMD interface as internal Akhil Goyal
@ 2021-08-03 19:25  0% ` Ajit Khaparde
  2021-08-04  6:44  0%   ` Matan Azrad
  0 siblings, 1 reply; 200+ results
From: Ajit Khaparde @ 2021-08-03 19:25 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: dpdk-dev, anoobj, Radu Nicolau, Doherty, Declan, Hemant Agrawal,
	Matan Azrad, Ananyev, Konstantin, Thomas Monjalon, Zhang,
	Roy Fan, Somalapuram Amaranath, Ruifeng Wang, Pablo de Lara,
	Fiona Trahe, adwivedi, michaelsh, rnagadheeraj, Jay Zhou

[-- Attachment #1: Type: text/plain, Size: 1184 bytes --]

On Tue, Aug 3, 2021 at 4:45 AM Akhil Goyal <gakhil@marvell.com> wrote:
>
> The APIs which are internal to PMD and cryptodev library
> can be marked as internal so that ABI checking does not
> shout for changes in APIs which are internal to DPDK.
>
> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

> ---
>  doc/guides/rel_notes/deprecation.rst | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index 6a35c7649a..f81bd87f10 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -148,6 +148,9 @@ Deprecation Notices
>    content. On Linux and FreeBSD, supported prior to DPDK 20.11,
>    original structure will be kept until DPDK 21.11.
>
> +* cryptodev: The APIs for interfacing between library and PMD will be marked
> +  as internal APIs in DPDK 21.11.
> +
>  * security: The functions ``rte_security_set_pkt_metadata`` and
>    ``rte_security_get_userdata`` will be made inline functions and additional
>    flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
> --
> 2.25.1
>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2] doc: announce changes to eventdev library
  2021-08-03  4:12  3%   ` Jerin Jacob
  2021-08-03  8:32  0%     ` Mattias Rönnblom
@ 2021-08-04  5:57  0%     ` Jayatheerthan, Jay
  2021-08-04  6:06  0%     ` Gujjar, Abhinandan S
  2021-08-05 14:22  0%     ` Thomas Monjalon
  3 siblings, 0 replies; 200+ results
From: Jayatheerthan, Jay @ 2021-08-04  5:57 UTC (permalink / raw)
  To: Jerin Jacob, Pavan Nikhilesh, Gujjar, Abhinandan S, Carrillo,
	Erik G, Van Haaren, Harry, Hemant Agrawal, McDaniel, Timothy,
	Liang Ma
  Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, mattias.ronnblom, Thomas Monjalon

> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Tuesday, August 3, 2021 9:43 AM
> To: Pavan Nikhilesh <pbhagavatula@marvell.com>; Gujjar, Abhinandan S <abhinandan.gujjar@intel.com>; Carrillo, Erik G
> <erik.g.carrillo@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>; Hemant Agrawal <hemant.agrawal@nxp.com>;
> McDaniel, Timothy <timothy.mcdaniel@intel.com>; Liang Ma <liang.j.ma@intel.com>; Jayatheerthan, Jay
> <jay.jayatheerthan@intel.com>
> Cc: Jerin Jacob <jerinj@marvell.com>; Ray Kinsella <mdr@ashroe.eu>; dpdk-dev <dev@dpdk.org>; mattias.ronnblom
> <mattias.ronnblom@ericsson.com>; Thomas Monjalon <thomas@monjalon.net>
> Subject: Re: [dpdk-dev] [PATCH v2] doc: announce changes to eventdev library
> 
> On Tue, Aug 3, 2021 at 2:46 AM <pbhagavatula@marvell.com> wrote:
> >
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > Make driver layer as internal, remove unnecessary rte_ prefix for
> > structures and functions that are not a part of public API.
> > Promote experimental trace and vector APIs to stable.
> > Add reserved field to `rte_event_timer` structure.
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> 
> 
> ++ Eventdev driver Maintainers.
> 
> This list is based on items identified for 21.11 ABI improvement at
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0
> 
> 
> > ---
> >  v2 Changes:
> >  - Fix build issues.
> >
> >  doc/guides/rel_notes/deprecation.rst | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> > index d9c0e65921..6ac321eb1e 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -158,3 +158,14 @@ Deprecation Notices
> >  * security: The functions ``rte_security_set_pkt_metadata`` and
> >    ``rte_security_get_userdata`` will be made inline functions and additional
> >    flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
> > +
> > +* eventdev: The file ``rte_eventdev_pmd.h`` will be renamed to ``eventdev_driver.h``
> > +  to make the driver interface as internal and the structures ``rte_eventdev_data``,
> > +  ``rte_eventdev`` and ``rte_eventdevs`` will be moved to a new file named
> > +  ``rte_eventdev_core.h`` in DPDK 21.11.
> > +  The ``rte_`` prefix for internal structures and functions will be removed across the
> > +  library.
> > +  The experimental eventdev trace APIs and ``rte_event_vector_pool_create``,
> > +  ``rte_event_eth_rx_adapter_vector_limits_get`` will be promoted to stable.
> > +  An 8byte reserved field will be added to the structure ``rte_event_timer`` to
> > +  support future extensions.
> > --
> > 2.17.1
> >

Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2] doc: announce changes to eventdev library
  2021-08-03  4:12  3%   ` Jerin Jacob
  2021-08-03  8:32  0%     ` Mattias Rönnblom
  2021-08-04  5:57  0%     ` Jayatheerthan, Jay
@ 2021-08-04  6:06  0%     ` Gujjar, Abhinandan S
  2021-08-05 14:22  0%     ` Thomas Monjalon
  3 siblings, 0 replies; 200+ results
From: Gujjar, Abhinandan S @ 2021-08-04  6:06 UTC (permalink / raw)
  To: Jerin Jacob, Pavan Nikhilesh, Carrillo, Erik G, Van Haaren,
	Harry, Hemant Agrawal, McDaniel, Timothy, Liang Ma,
	Jayatheerthan, Jay
  Cc: Jerin Jacob, Ray Kinsella, dpdk-dev, mattias.ronnblom, Thomas Monjalon



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Tuesday, August 3, 2021 9:43 AM
> To: Pavan Nikhilesh <pbhagavatula@marvell.com>; Gujjar, Abhinandan S
> <abhinandan.gujjar@intel.com>; Carrillo, Erik G <erik.g.carrillo@intel.com>;
> Van Haaren, Harry <harry.van.haaren@intel.com>; Hemant Agrawal
> <hemant.agrawal@nxp.com>; McDaniel, Timothy
> <timothy.mcdaniel@intel.com>; Liang Ma <liang.j.ma@intel.com>;
> Jayatheerthan, Jay <jay.jayatheerthan@intel.com>
> Cc: Jerin Jacob <jerinj@marvell.com>; Ray Kinsella <mdr@ashroe.eu>; dpdk-
> dev <dev@dpdk.org>; mattias.ronnblom
> <mattias.ronnblom@ericsson.com>; Thomas Monjalon
> <thomas@monjalon.net>
> Subject: Re: [dpdk-dev] [PATCH v2] doc: announce changes to eventdev
> library
> 
> On Tue, Aug 3, 2021 at 2:46 AM <pbhagavatula@marvell.com> wrote:
> >
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > Make driver layer as internal, remove unnecessary rte_ prefix for
> > structures and functions that are not a part of public API.
> > Promote experimental trace and vector APIs to stable.
> > Add reserved field to `rte_event_timer` structure.
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> 
> 
> ++ Eventdev driver Maintainers.
> 
> This list is based on items identified for 21.11 ABI improvement at
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW
> 6voH5Dqv9UxeyfE/edit#gid=0
> 
> 
> > ---
> >  v2 Changes:
> >  - Fix build issues.
> >
> >  doc/guides/rel_notes/deprecation.rst | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/deprecation.rst
> > b/doc/guides/rel_notes/deprecation.rst
> > index d9c0e65921..6ac321eb1e 100644
> > --- a/doc/guides/rel_notes/deprecation.rst
> > +++ b/doc/guides/rel_notes/deprecation.rst
> > @@ -158,3 +158,14 @@ Deprecation Notices
> >  * security: The functions ``rte_security_set_pkt_metadata`` and
> >    ``rte_security_get_userdata`` will be made inline functions and additional
> >    flags will be added in structure ``rte_security_ctx`` in DPDK 21.11.
> > +
> > +* eventdev: The file ``rte_eventdev_pmd.h`` will be renamed to
> > +``eventdev_driver.h``
> > +  to make the driver interface as internal and the structures
> > +``rte_eventdev_data``,
> > +  ``rte_eventdev`` and ``rte_eventdevs`` will be moved to a new file
> > +named
> > +  ``rte_eventdev_core.h`` in DPDK 21.11.
> > +  The ``rte_`` prefix for internal structures and functions will be
> > +removed across the
> > +  library.
> > +  The experimental eventdev trace APIs and
> > +``rte_event_vector_pool_create``,
> > +  ``rte_event_eth_rx_adapter_vector_limits_get`` will be promoted to
> stable.
> > +  An 8byte reserved field will be added to the structure
> > +``rte_event_timer`` to
> > +  support future extensions.
> > --
> > 2.17.1
> >
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce cryptodev-PMD interface as internal
  2021-08-03 19:25  0% ` Ajit Khaparde
@ 2021-08-04  6:44  0%   ` Matan Azrad
  2021-08-04  8:44  0%     ` Hemant Agrawal
  0 siblings, 1 reply; 200+ results
From: Matan Azrad @ 2021-08-04  6:44 UTC (permalink / raw)
  To: Ajit Khaparde, Akhil Goyal
  Cc: dpdk-dev, anoobj, Radu Nicolau, Doherty, Declan, Hemant Agrawal,
	Ananyev, Konstantin, NBU-Contact-Thomas Monjalon, Zhang, Roy Fan,
	Somalapuram Amaranath, Ruifeng Wang, Pablo de Lara, Fiona Trahe,
	adwivedi, michaelsh, rnagadheeraj, Jay Zhou



From: Ajit Khaparde
> > The APIs which are internal to PMD and cryptodev library
> > can be marked as internal so that ABI checking do not
> > shout for changes in APIs which are internal to DPDK.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Matan Azrad <matan@nvidia.com>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce cryptodev-PMD interface as internal
  2021-08-04  6:44  0%   ` Matan Azrad
@ 2021-08-04  8:44  0%     ` Hemant Agrawal
  2021-08-04 14:35  0%       ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Hemant Agrawal @ 2021-08-04  8:44 UTC (permalink / raw)
  To: Matan Azrad, Ajit Khaparde, Akhil Goyal
  Cc: dpdk-dev, anoobj, Radu Nicolau, Doherty, Declan, Ananyev,
	Konstantin, NBU-Contact-Thomas Monjalon, Zhang, Roy Fan,
	Somalapuram Amaranath, Ruifeng Wang, Pablo de Lara, Fiona Trahe,
	adwivedi, michaelsh, rnagadheeraj, Jay Zhou

> 
> From: Ajit Khaparde
> > > The APIs which are internal to PMD and cryptodev library can be
> > > marked as internal so that ABI checking does not shout for changes in
> > > APIs which are internal to DPDK.
> > >
> > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v5] doc: policy on the promotion of experimental APIs
      2021-08-03 16:44 23% ` [dpdk-dev] [PATCH v4] " Ray Kinsella
@ 2021-08-04  9:34 23% ` Ray Kinsella
  2021-08-04 10:39  3%   ` Thomas Monjalon
  2 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2021-08-04  9:34 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, john.mcnamara, roretzla, ferruh.yigit, thomas,
	david.marchand, stephen, jerinjacobk, Ray Kinsella

Clarifying the ABI policy on the promotion of experimental APIS to stable.
We have a fair number of APIs that have been experimental for more than
2 years. This policy amendment indicates that these APIs should be
promoted or removed, or should at least form a conservation between the
maintainer and original contributor.

Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
v2: comments on abi expiry from Tyler Retzlaff.
v3: typos in the git commit message
v4: typos and comments by Jerin Jacob
v5: typos caught by the CI

 doc/guides/contributing/abi_policy.rst | 25 ++++++++++++++++++++++---
 1 file changed, 22 insertions(+), 3 deletions(-)

diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
index 4ad87dbfed..520763b63a 100644
--- a/doc/guides/contributing/abi_policy.rst
+++ b/doc/guides/contributing/abi_policy.rst
@@ -26,9 +26,10 @@ General Guidelines
    symbols is managed with :ref:`ABI Versioning <abi_versioning>`.
 #. The removal of symbols is considered an :ref:`ABI breakage <abi_breakages>`,
    once approved these will form part of the next ABI version.
-#. Libraries or APIs marked as :ref:`experimental <experimental_apis>` may
-   be changed or removed without prior notice, as they are not considered part
-   of an ABI version.
+#. Libraries or APIs marked as :ref:`experimental <experimental_apis>` may be
+   changed or removed without prior notice, as they are not considered part of
+   an ABI version. The :ref:`experimental <experimental_apis>` status of an API
+   is not an indefinite state.
 #. Updates to the :ref:`minimum hardware requirements <hw_rqmts>`, which drop
    support for hardware which was previously supported, should be treated as an
    ABI change.
@@ -358,3 +359,21 @@ Libraries
 Libraries marked as ``experimental`` are entirely not considered part of an ABI
 version.
 All functions in such libraries may be changed or removed without prior notice.
+
+Promotion to stable
+~~~~~~~~~~~~~~~~~~~
+
+An API's ``experimental`` status should be reviewed annually by the
+maintainer and/or the original contributor. Ordinarily, APIs marked as
+``experimental`` will be promoted to the stable ABI once a maintainer is
+satisfied that the API is mature and unlikely to change.
+
+In exceptional circumstances, should an API still be classified as
+``experimental`` after two years, without any prospect of becoming part of
+the stable API, it will then become a candidate for removal, to avoid the
+accumulation of abandoned symbols.
+
+Should an API's Binary Interface change, usually due to a direct change to the
+API's signature, it is reasonable for the review and expiry clocks to reset. The
+promotion or removal of symbols will typically form part of a conversation
+between the maintainer and the original contributor.
-- 
2.26.2


^ permalink raw reply	[relevance 23%]

* Re: [dpdk-dev] [PATCH v5] doc: policy on the promotion of experimental APIs
  2021-08-04  9:34 23% ` [dpdk-dev] [PATCH v5] " Ray Kinsella
@ 2021-08-04 10:39  3%   ` Thomas Monjalon
  2021-08-04 11:49  0%     ` Kinsella, Ray
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-08-04 10:39 UTC (permalink / raw)
  To: Ray Kinsella
  Cc: dev, bruce.richardson, john.mcnamara, roretzla, ferruh.yigit,
	david.marchand, stephen, jerinjacobk

04/08/2021 11:34, Ray Kinsella:
> Clarifying the ABI policy on the promotion of experimental APIS to stable.
> We have a fair number of APIs that have been experimental for more than
> 2 years. This policy amendment indicates that these APIs should be
> promoted or removed, or should at least form a conservation between the

s/conservation/conversation/

> maintainer and original contributor.
> 
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
> ---
> +#. Libraries or APIs marked as :ref:`experimental <experimental_apis>` may be
> +   changed or removed without prior notice, as they are not considered part of
> +   an ABI version. The :ref:`experimental <experimental_apis>` status of an API
> +   is not an indefinite state.
[...]
> +Promotion to stable
> +~~~~~~~~~~~~~~~~~~~
> +
> +An API's ``experimental`` status should be reviewed annually, by both the
> +maintainer and/or the original contributor. Ordinarily APIs marked as
> +``experimental`` will be promoted to the stable ABI once a maintainer has become
> +satisfied that the API is mature and is unlikely to change.
> +
> +In exceptional circumstances, should an API still be classified as
> +``experimental`` after two years and is without any prospect of becoming part of
> +the stable API. The API will then become a candidate for removal, to avoid the
> +accumulation of abandoned symbols.
> +
> +Should an API's Binary Interface change, usually due to a direct change to the

API's Binary Interface?
I assume you mean ABI.

> +API's signature, it is reasonable for the review and expiry clocks to reset. The
> +promotion or removal of symbols will typically form part of a conversation
> +between the maintainer and the original contributor.

Acked-by: Thomas Monjalon <thomas@monjalon.net>

Applied with above changes, thanks.



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v5] doc: policy on the promotion of experimental APIs
  2021-08-04 10:39  3%   ` Thomas Monjalon
@ 2021-08-04 11:49  0%     ` Kinsella, Ray
  0 siblings, 0 replies; 200+ results
From: Kinsella, Ray @ 2021-08-04 11:49 UTC (permalink / raw)
  To: Thomas Monjalon
  Cc: dev, bruce.richardson, john.mcnamara, roretzla, ferruh.yigit,
	david.marchand, stephen, jerinjacobk



On 04/08/2021 11:39, Thomas Monjalon wrote:
> 04/08/2021 11:34, Ray Kinsella:
>> Clarifying the ABI policy on the promotion of experimental APIS to stable.
>> We have a fair number of APIs that have been experimental for more than
>> 2 years. This policy amendment indicates that these APIs should be
>> promoted or removed, or should at least form a conservation between the
> 
> s/conservation/conversation/
> 
>> maintainer and original contributor.
>>
>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
>> Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
>> ---
>> +#. Libraries or APIs marked as :ref:`experimental <experimental_apis>` may be
>> +   changed or removed without prior notice, as they are not considered part of
>> +   an ABI version. The :ref:`experimental <experimental_apis>` status of an API
>> +   is not an indefinite state.
> [...]
>> +Promotion to stable
>> +~~~~~~~~~~~~~~~~~~~
>> +
>> +An API's ``experimental`` status should be reviewed annually, by both the
>> +maintainer and/or the original contributor. Ordinarily APIs marked as
>> +``experimental`` will be promoted to the stable ABI once a maintainer has become
>> +satisfied that the API is mature and is unlikely to change.
>> +
>> +In exceptional circumstances, should an API still be classified as
>> +``experimental`` after two years and is without any prospect of becoming part of
>> +the stable API. The API will then become a candidate for removal, to avoid the
>> +accumulation of abandoned symbols.
>> +
>> +Should an API's Binary Interface change, usually due to a direct change to the
> 
> API's Binary Interface?
> I assume you mean ABI.
> 
>> +API's signature, it is reasonable for the review and expiry clocks to reset. The
>> +promotion or removal of symbols will typically form part of a conversation
>> +between the maintainer and the original contributor.
> 
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> 
> Applied with above changes, thanks.
> 

Thanks.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6 2/2] bus/auxiliary: introduce auxiliary bus
       [not found]             ` <DM4PR12MB53736410D2C07101F872363EA1F19@DM4PR12MB5373.namprd12.prod.outlook.com>
@ 2021-08-04 12:14  3%           ` Kinsella, Ray
  2021-08-04 13:00  3%             ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-08-04 12:14 UTC (permalink / raw)
  To: Xueming(Steven) Li, dpdk-dev



On 04/08/2021 13:11, Xueming(Steven) Li wrote:
> 
> 
>> -----Original Message-----
>> From: Kinsella, Ray <mdr@ashroe.eu>
>> Sent: Wednesday, August 4, 2021 7:46 PM
>> To: Xueming(Steven) Li <xuemingl@nvidia.com>
>> Subject: Re: [PATCH v6 2/2] bus/auxiliary: introduce auxiliary bus
>>
>>
>>
>> On 04/08/2021 12:21, Xueming(Steven) Li wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Kinsella, Ray <mdr@ashroe.eu>
>>>> Sent: Wednesday, August 4, 2021 6:00 PM
>>>> To: Xueming(Steven) Li <xuemingl@nvidia.com>
>>>> Cc: dev@dpdk.org; Wang Haiyue <haiyue.wang@intel.com>;
>>>> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Neil Horman
>>>> <nhorman@tuxdriver.com>
>>>> Subject: Re: [PATCH v6 2/2] bus/auxiliary: introduce auxiliary bus
>>>>
>>>>
>>>>
>>>> On 25/06/2021 12:47, Xueming Li wrote:
>>>>> Auxiliary bus [1] provides a way to split function into
>>>>> child-devices representing sub-domains of functionality. Each
>>>>> auxiliary device represents a part of its parent functionality.
>>>>>
>>>>> Auxiliary device is identified by unique device name, sysfs path:
>>>>>   /sys/bus/auxiliary/devices/<name>
>>>>>
> >>>>> Devargs legacy syntax of auxiliary device:
>>>>>   -a auxiliary:<name>[,args...]
>>>>> Devargs generic syntax of auxiliary device:
>>>>>   -a bus=auxiliary,name=<name>,,/class=<classs>,,/driver=<driver>,,
>>>>>
>>>>> [1] kernel auxiliary bus document:
>>>>> https://www.kernel.org/doc/html/latest/driver-api/auxiliary_bus.html
>>>>>
>>>>> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
>>>>> Cc: Wang Haiyue <haiyue.wang@intel.com>
>>>>> Cc: Thomas Monjalon <thomas@monjalon.net>
>>>>> Cc: Kinsella Ray <mdr@ashroe.eu>
>>>>> ---
>>>>>  MAINTAINERS                               |   5 +
>>>>>  doc/guides/rel_notes/release_21_08.rst    |   6 +
>>>>>  drivers/bus/auxiliary/auxiliary_common.c  | 411
>>>>> ++++++++++++++++++++++  drivers/bus/auxiliary/auxiliary_params.c  |
>>>>> ++++++++++++++++++++++ 59 ++++
>>>>>  drivers/bus/auxiliary/linux/auxiliary.c   | 141 ++++++++
>>>>>  drivers/bus/auxiliary/meson.build         |  16 +
>>>>>  drivers/bus/auxiliary/private.h           |  74 ++++
>>>>>  drivers/bus/auxiliary/rte_bus_auxiliary.h | 201 +++++++++++
>>>>>  drivers/bus/auxiliary/version.map         |   7 +
>>>>>  drivers/bus/meson.build                   |   1 +
>>>>>  10 files changed, 921 insertions(+)  create mode 100644
>>>>> drivers/bus/auxiliary/auxiliary_common.c
>>>>>  create mode 100644 drivers/bus/auxiliary/auxiliary_params.c
>>>>>  create mode 100644 drivers/bus/auxiliary/linux/auxiliary.c
>>>>>  create mode 100644 drivers/bus/auxiliary/meson.build  create mode
>>>>> 100644 drivers/bus/auxiliary/private.h  create mode 100644
>>>>> drivers/bus/auxiliary/rte_bus_auxiliary.h
>>>>>  create mode 100644 drivers/bus/auxiliary/version.map
>>>>>
>>>>
>>>> Acked-by: Ray Kinsella <mdr@ashroe.eu>
>>>
>>> Thanks, but this patch already integrated :)
>>
>> It appears in the order in which I am going through my email is incorrect. :-)
>>
>>>
>>> Would you like to have a look at another deprecation notice? Andrew reviewed RFC:
>>> https://mails.dpdk.org/archives/dev/2021-August/216007.html
>>>
>>
>> Its not strictly a depreciation notice though, you are not breaking anything right.
>> Since you are not breaking anything, don't think the notice is required in the 21.11 timeframe.
>>
>> Now if you where doing it in 21.08, it would be an ABI change and that would be a different story.
> 
> Thanks for looking at this!
> Yes, it targets to 21.11. The offloading flag is fine, but the shared_group does break ABI, detail:
> 	https://mails.dpdk.org/archives/dev/2021-July/215575.html

Right ... it's a new field, not a deprecation as such.
What I mean by this is that no existing code is broken.

21.11 is a new ABI in any case and you are not deprecating anything, so no notice is required.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v6 2/2] bus/auxiliary: introduce auxiliary bus
  2021-08-04 12:14  3%           ` Kinsella, Ray
@ 2021-08-04 13:00  3%             ` Xueming(Steven) Li
  2021-08-04 13:12  5%               ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Xueming(Steven) Li @ 2021-08-04 13:00 UTC (permalink / raw)
  To: Kinsella, Ray, dpdk-dev



> -----Original Message-----
> From: Kinsella, Ray <mdr@ashroe.eu>
> Sent: Wednesday, August 4, 2021 8:14 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>; dpdk-dev <dev@dpdk.org>
> Subject: Re: [PATCH v6 2/2] bus/auxiliary: introduce auxiliary bus
> 
> 
> 
> On 04/08/2021 13:11, Xueming(Steven) Li wrote:
> >
> >
> >> -----Original Message-----
> >> From: Kinsella, Ray <mdr@ashroe.eu>
> >> Sent: Wednesday, August 4, 2021 7:46 PM
> >> To: Xueming(Steven) Li <xuemingl@nvidia.com>
> >> Subject: Re: [PATCH v6 2/2] bus/auxiliary: introduce auxiliary bus
> >>
> >>
> >>
> >> On 04/08/2021 12:21, Xueming(Steven) Li wrote:
> >>>
> >>>
> >>>> -----Original Message-----
> >>>> From: Kinsella, Ray <mdr@ashroe.eu>
> >>>> Sent: Wednesday, August 4, 2021 6:00 PM
> >>>> To: Xueming(Steven) Li <xuemingl@nvidia.com>
> >>>> Cc: dev@dpdk.org; Wang Haiyue <haiyue.wang@intel.com>;
> >>>> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Neil Horman
> >>>> <nhorman@tuxdriver.com>
> >>>> Subject: Re: [PATCH v6 2/2] bus/auxiliary: introduce auxiliary bus
> >>>>
> >>>>
> >>>>
> >>>> On 25/06/2021 12:47, Xueming Li wrote:
> >>>>> Auxiliary bus [1] provides a way to split function into
> >>>>> child-devices representing sub-domains of functionality. Each
> >>>>> auxiliary device represents a part of its parent functionality.
> >>>>>
> >>>>> Auxiliary device is identified by unique device name, sysfs path:
> >>>>>   /sys/bus/auxiliary/devices/<name>
> >>>>>
> >>>>> Devargs legacy syntax of auxiliary device:
> >>>>>   -a auxiliary:<name>[,args...]
> >>>>> Devargs generic syntax of auxiliary device:
> >>>>>   -a
> >>>>> bus=auxiliary,name=<name>,,/class=<classs>,,/driver=<driver>,,
> >>>>>
> >>>>> [1] kernel auxiliary bus document:
> >>>>> https://www.kernel.org/doc/html/latest/driver-api/auxiliary_bus.ht
> >>>>> ml
> >>>>>
> >>>>> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> >>>>> Cc: Wang Haiyue <haiyue.wang@intel.com>
> >>>>> Cc: Thomas Monjalon <thomas@monjalon.net>
> >>>>> Cc: Kinsella Ray <mdr@ashroe.eu>
> >>>>> ---
> >>>>>  MAINTAINERS                               |   5 +
> >>>>>  doc/guides/rel_notes/release_21_08.rst    |   6 +
> >>>>>  drivers/bus/auxiliary/auxiliary_common.c  | 411
> >>>>> ++++++++++++++++++++++  drivers/bus/auxiliary/auxiliary_params.c
> >>>>> ++++++++++++++++++++++ |
> >>>>> ++++++++++++++++++++++ 59 ++++
> >>>>>  drivers/bus/auxiliary/linux/auxiliary.c   | 141 ++++++++
> >>>>>  drivers/bus/auxiliary/meson.build         |  16 +
> >>>>>  drivers/bus/auxiliary/private.h           |  74 ++++
> >>>>>  drivers/bus/auxiliary/rte_bus_auxiliary.h | 201 +++++++++++
> >>>>>  drivers/bus/auxiliary/version.map         |   7 +
> >>>>>  drivers/bus/meson.build                   |   1 +
> >>>>>  10 files changed, 921 insertions(+)  create mode 100644
> >>>>> drivers/bus/auxiliary/auxiliary_common.c
> >>>>>  create mode 100644 drivers/bus/auxiliary/auxiliary_params.c
> >>>>>  create mode 100644 drivers/bus/auxiliary/linux/auxiliary.c
> >>>>>  create mode 100644 drivers/bus/auxiliary/meson.build  create mode
> >>>>> 100644 drivers/bus/auxiliary/private.h  create mode 100644
> >>>>> drivers/bus/auxiliary/rte_bus_auxiliary.h
> >>>>>  create mode 100644 drivers/bus/auxiliary/version.map
> >>>>>
> >>>>
> >>>> Acked-by: Ray Kinsella <mdr@ashroe.eu>
> >>>
> >>> Thanks, but this patch already integrated :)
> >>
> >> It appears in the order in which I am going through my email is
> >> incorrect. :-)
> >>
> >>>
> >>> Would you like to have a look at another deprecation notice? Andrew reviewed RFC:
> >>> https://mails.dpdk.org/archives/dev/2021-August/216007.html
> >>>
> >>
> >> Its not strictly a depreciation notice though, you are not breaking anything right.
> >> Since you are not breaking anything, don't think the notice is required in the 21.11 timeframe.
> >>
> >> Now if you where doing it in 21.08, it would be an ABI change and that would be a different story.
> >
> > Thanks for looking at this!
> > Yes, it targets to 21.11. The offloading flag is fine, but the shared_group does break ABI, detail:
> > 	https://mails.dpdk.org/archives/dev/2021-July/215575.html
> 
> Right ... its a new field, not a depreciation as such.
> What I mean by this is that no existing code is broken.
> 
> 21.11 is a new ABI in any case and you are not depreciating anything, so no notice is required.

Maybe it's a new process, confirmed with Thomas, it's expected:
https://doc.dpdk.org/guides/contributing/abi_policy.html#abi-changes

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v6 2/2] bus/auxiliary: introduce auxiliary bus
  2021-08-04 13:00  3%             ` Xueming(Steven) Li
@ 2021-08-04 13:12  5%               ` Thomas Monjalon
  2021-08-04 13:53  0%                 ` Kinsella, Ray
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-08-04 13:12 UTC (permalink / raw)
  To: Kinsella, Ray, Xueming(Steven) Li; +Cc: dpdk-dev, techboard

04/08/2021 15:00, Xueming(Steven) Li:
> From: Kinsella, Ray <mdr@ashroe.eu>
> > On 04/08/2021 13:11, Xueming(Steven) Li wrote:
> > > From: Kinsella, Ray <mdr@ashroe.eu>
> > >> Its not strictly a depreciation notice though, you are not breaking anything right.
> > >> Since you are not breaking anything, don't think the notice is required in the 21.11 timeframe.
> > >>
> > >> Now if you where doing it in 21.08, it would be an ABI change and that would be a different story.
> > >
> > > Thanks for looking at this!
> > > Yes, it targets to 21.11. The offloading flag is fine, but the shared_group does break ABI, detail:
> > > 	https://mails.dpdk.org/archives/dev/2021-July/215575.html
> > 
> > Right ... its a new field, not a depreciation as such.
> > What I mean by this is that no existing code is broken.
> > 
> > 21.11 is a new ABI in any case and you are not depreciating anything, so no notice is required.
> 
> Maybe it a new process, confirmed with Thomas, it's expected:
> https://doc.dpdk.org/guides/contributing/abi_policy.html#abi-changes

I think what Ray means is that it breaks ABI but not API,
so he doesn't consider a notice is required.
My understanding of the policy is that *any* ABI change requires a notice.
But if you want to make it lighter and allow any non-announced ABI change
in an ABI-breaking release, I think I would vote for.

Cc techboard@dpdk.org



^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [PATCH v6 2/2] bus/auxiliary: introduce auxiliary bus
  2021-08-04 13:12  5%               ` Thomas Monjalon
@ 2021-08-04 13:53  0%                 ` Kinsella, Ray
  2021-08-04 14:13  4%                   ` Thomas Monjalon
  0 siblings, 1 reply; 200+ results
From: Kinsella, Ray @ 2021-08-04 13:53 UTC (permalink / raw)
  To: Thomas Monjalon, Xueming(Steven) Li; +Cc: dpdk-dev, techboard



On 04/08/2021 14:12, Thomas Monjalon wrote:
> 04/08/2021 15:00, Xueming(Steven) Li:
>> From: Kinsella, Ray <mdr@ashroe.eu>
>>> On 04/08/2021 13:11, Xueming(Steven) Li wrote:
>>>> From: Kinsella, Ray <mdr@ashroe.eu>
>>>>> Its not strictly a depreciation notice though, you are not breaking anything right.
>>>>> Since you are not breaking anything, don't think the notice is required in the 21.11 timeframe.
>>>>>
>>>>> Now if you where doing it in 21.08, it would be an ABI change and that would be a different story.
>>>>
>>>> Thanks for looking at this!
>>>> Yes, it targets to 21.11. The offloading flag is fine, but the shared_group does break ABI, detail:
>>>> 	https://mails.dpdk.org/archives/dev/2021-July/215575.html
>>>
>>> Right ... its a new field, not a depreciation as such.
>>> What I mean by this is that no existing code is broken.
>>>
>>> 21.11 is a new ABI in any case and you are not depreciating anything, so no notice is required.
>>
>> Maybe it a new process, confirmed with Thomas, it's expected:
>> https://doc.dpdk.org/guides/contributing/abi_policy.html#abi-changes
> 
> I think what Ray means is that it breaks ABI but not API,
> so he doesn't consider a notice is required.

> My understanding of the policy is that *any* ABI change requires a notice.
> But if you want to make it lighter and allow any non-announced ABI change
> in an ABI-breaking release, I think I would vote for.

Thanks for clarifying Thomas ... you are correct. 

> 
> Cc techboard@dpdk.org
> 
 

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v6 2/2] bus/auxiliary: introduce auxiliary bus
  2021-08-04 13:53  0%                 ` Kinsella, Ray
@ 2021-08-04 14:13  4%                   ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-08-04 14:13 UTC (permalink / raw)
  To: Xueming(Steven) Li, Kinsella, Ray; +Cc: dpdk-dev, techboard

04/08/2021 15:53, Kinsella, Ray:
> On 04/08/2021 14:12, Thomas Monjalon wrote:
> > 04/08/2021 15:00, Xueming(Steven) Li:
> >> From: Kinsella, Ray <mdr@ashroe.eu>
> >>> On 04/08/2021 13:11, Xueming(Steven) Li wrote:
> >>>> From: Kinsella, Ray <mdr@ashroe.eu>
> >>>>> Its not strictly a depreciation notice though, you are not breaking anything right.
> >>>>> Since you are not breaking anything, don't think the notice is required in the 21.11 timeframe.
> >>>>>
> >>>>> Now if you where doing it in 21.08, it would be an ABI change and that would be a different story.
> >>>>
> >>>> Thanks for looking at this!
> >>>> Yes, it targets to 21.11. The offloading flag is fine, but the shared_group does break ABI, detail:
> >>>> 	https://mails.dpdk.org/archives/dev/2021-July/215575.html
> >>>
> >>> Right ... its a new field, not a depreciation as such.
> >>> What I mean by this is that no existing code is broken.
> >>>
> >>> 21.11 is a new ABI in any case and you are not depreciating anything, so no notice is required.
> >>
> >> Maybe it a new process, confirmed with Thomas, it's expected:
> >> https://doc.dpdk.org/guides/contributing/abi_policy.html#abi-changes
> > 
> > I think what Ray means is that it breaks ABI but not API,
> > so he doesn't consider a notice is required.
> 
> > My understanding of the policy is that *any* ABI change requires a notice.
> > But if you want to make it lighter and allow any non-announced ABI change
> > in an ABI-breaking release, I think I would vote for.
> 
> Thanks for clarifying Thomas ... you are correct.

In the meantime, let's review and ack notices, even if ABI-only change:
https://patches.dpdk.org/bundle/tmonjalo/deprecation-notices/

We'll discuss later if we can accept more ABI change,
but we should try to be on the safe side for those already announced.



^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH] doc: announce: make rte intr handle internal
  2021-08-03  4:05  0%   ` Jerin Jacob
@ 2021-08-04 14:22  0%     ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-08-04 14:22 UTC (permalink / raw)
  To: Harman Kalra
  Cc: Xia, Chenbo, dev, jerinj, david.marchand, Ray Kinsella, Jerin Jacob

> > > Moving struct rte_intr_handle as an internal structure to
> > > avoid any ABI breakages in future. Since this structure defines
> > > some static arrays and changing respective macros breaks the ABI.
> > > Eg:
> > > Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of maximum 512
> > > MSI-X interrupts that can be defined for a PCI device, while PCI
> > > specification allows maximum 2048 MSI-X interrupts that can be used.
> > > If some PCI device requires more than 512 vectors, either change the
> > > RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on
> > > PCI device MSI-X size on probe time. Either way its an ABI breakage.
> > >
> > > Discussion thread:
> > > https://mails.dpdk.org/archives/dev/2021-March/202959.html
> > >
> > > Change already included in 21.11 ABI improvement spreadsheet (item 42):
> > > https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9U
> > > xeyfE/edit#gid=0
> > >
> > > Signed-off-by: Harman Kalra <hkalra@marvell.com>
> > > ---
> > > --- a/doc/guides/rel_notes/deprecation.rst
> > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > +* eal: Making ``struct rte_intr_handle`` internal to avoid any ABI breakages
> > > +  in future.
> > > +
> >
> > Acked-by: Chenbo Xia <chenbo.xia@intel.com>
> 
> Acked-by: Jerin Jacob <jerinj@marvell.com>

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

Applied, thanks.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce cryptodev-PMD interface as internal
  2021-08-04  8:44  0%     ` Hemant Agrawal
@ 2021-08-04 14:35  0%       ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-08-04 14:35 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: Matan Azrad, Ajit Khaparde, dev, anoobj, Radu Nicolau, Doherty,
	Declan, Ananyev, Konstantin, Zhang, Roy Fan,
	Somalapuram Amaranath, Ruifeng Wang, Pablo de Lara, Fiona Trahe,
	adwivedi, michaelsh, rnagadheeraj, Jay Zhou, Hemant Agrawal

> > > > The APIs which are internal to PMD and cryptodev library can be
> > > > marked as internal so that ABI checking does not shout for changes in
> > > > APIs which are internal to DPDK.
> > > >
> > > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> > Acked-by: Matan Azrad <matan@nvidia.com>
> Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>

Applied, thanks.



^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v6] devtools: script to track map symbols
  @ 2021-08-04 16:23  5% ` Ray Kinsella
  2021-08-04 16:27  5% ` [dpdk-dev] [PATCH v7] " Ray Kinsella
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-08-04 16:23 UTC (permalink / raw)
  To: dev; +Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr

This script tracks the growth of stable and experimental symbols
over releases since v19.11. The script has the ability to
count the added symbols between two dpdk releases, and to
list experimental symbols present in two dpdk releases
(expired symbols).

example usages:

Count symbols added since v19.11
$ devtools/symbol_tool.py count-symbols

Count symbols added since v20.11
$ devtools/symbol_tool.py count-symbols --releases v20.11,v21.05

List experimental symbols present in v20.11 and v21.05
$ devtools/symbol_tool.py list-expired --releases v20.11,v21.05

List experimental symbols in libraries only, present since v19.11
$ devtools/symbol_tool.py list-expired --directory lib

Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
v2: reworked to fix pylint errors
v3: sent with the correct in-reply-to
v4: fix typos picked up by the CI
v5: fix terminal_size & directory args
v6: added list-expired, to list expired experimental symbols

 devtools/symbol_tool.py | 377 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 377 insertions(+)
 create mode 100755 devtools/symbol_tool.py

diff --git a/devtools/symbol_tool.py b/devtools/symbol_tool.py
new file mode 100755
index 0000000000..63969a131b
--- /dev/null
+++ b/devtools/symbol_tool.py
@@ -0,0 +1,377 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+'''Tool to count or list symbols in each DPDK release'''
+from pathlib import Path
+import sys
+import os
+import subprocess
+import argparse
+import re
+import datetime
+try:
+    from parsley import makeGrammar
+except ImportError:
+    print('This script uses the package Parsley to parse C Mapfiles.\n'
+          'This can be installed with \"pip install parsley".')
+    sys.exit()
+
+MAP_GRAMMAR = r"""
+
+ws = (' ' | '\r' | '\n' | '\t')*
+
+ABI_VER = ({})
+DPDK_VER = ('DPDK_' ABI_VER)
+ABI_NAME = ('INTERNAL' | 'EXPERIMENTAL' | DPDK_VER)
+comment = '#' (~'\n' anything)+ '\n'
+symbol = (~(';' | '}}' | '#') anything )+:c ';' -> ''.join(c)
+global = 'global:'
+local = 'local: *;'
+symbols = comment* symbol:s ws comment* -> s
+
+abi = (abi_section+):m -> dict(m)
+abi_section = (ws ABI_NAME:e ws '{{' ws global* (~local ws symbols)*:s ws local* ws '}}' ws DPDK_VER* ';' ws) -> (e,s)
+"""
+
+def get_abi_versions():
+    '''Returns a string of possible dpdk abi versions'''
+
+    year = datetime.date.today().year - 2000
+    tags = " |".join(['\'{}\''.format(i) \
+                     for i in reversed(range(21, year + 1)) ])
+    tags  = tags + ' | \'20.0.1\' | \'20.0\' | \'20\''
+
+    return tags
+
+def get_dpdk_releases():
+    '''Returns a list of dpdk release tag names since v19.11'''
+
+    year = datetime.date.today().year - 2000
+    year_range = "|".join("{}".format(i) for i in range(19,year + 1))
+    pattern = re.compile(r'^\"v(' +  year_range + r')\.\d{2}\"$')
+
+    cmd = ['git', 'for-each-ref', '--sort=taggerdate', '--format', '"%(tag)"']
+    try:
+        result = subprocess.run(cmd, \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        print("Failed to interogate git for release tags")
+        sys.exit()
+
+
+    tags = result.stdout.decode('utf-8').split('\n')
+
+    # find the non-rcs between now and v19.11
+    tags = [ tag.replace('\"','') \
+             for tag in reversed(tags) \
+             if pattern.match(tag) ][:-3]
+
+    return tags
+
+def fix_directory_name(path):
+    '''Prepend librte to the source directory name'''
+    mapfilepath1 = str(path.parent.name)
+    mapfilepath2 = str(path.parents[1])
+    mapfilepath = mapfilepath2 + '/librte_' + mapfilepath1
+
+    return mapfilepath
+
+def directory_renamed(path, rel):
+    '''Fix removal of the librte_ from the directory names'''
+
+    mapfilepath = fix_directory_name(path)
+    tagfile = '{}:{}/{}'.format(rel, mapfilepath,  path.name)
+
+    try:
+        result = subprocess.run(['git', 'show', tagfile], \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        result = None
+
+    return result
+
+def mapfile_renamed(path, rel):
+    '''Fix renaming of the map file'''
+    newfile = None
+
+    result = subprocess.run(['git', 'ls-tree', \
+                             rel, str(path.parent) + '/'], \
+                            stdout=subprocess.PIPE, \
+                            stderr=subprocess.PIPE,
+                            check=True)
+    dentries = result.stdout.decode('utf-8')
+    dentries = dentries.split('\n')
+
+    # filter entries looking for the map file
+    dentries = [dentry for dentry in dentries if dentry.endswith('.map')]
+    if len(dentries) > 1 or len(dentries) == 0:
+        return None
+
+    dparts = dentries[0].split('/')
+    newfile = dparts[len(dparts) - 1]
+
+    if newfile is not None:
+        tagfile = '{}:{}/{}'.format(rel, path.parent, newfile)
+
+        try:
+            result = subprocess.run(['git', 'show', tagfile], \
+                                    stdout=subprocess.PIPE, \
+                                    stderr=subprocess.PIPE,
+                                    check=True)
+        except subprocess.CalledProcessError:
+            result = None
+
+    else:
+        result = None
+
+    return result
+
+def mapfile_and_directory_renamed(path, rel):
+    '''Fix renaming of the map file & the source directory'''
+    mapfilepath = Path("{}/{}".format(fix_directory_name(path),path.name))
+
+    return mapfile_renamed(mapfilepath, rel)
+
+FIX_STRATEGIES = [directory_renamed, \
+                  mapfile_renamed, \
+                  mapfile_and_directory_renamed]
+
+def get_symbols(map_parser, release, mapfile_path):
+    '''Count the symbols for a given release and mapfile'''
+    abi_sections = {}
+
+    tagfile = '{}:{}'.format(release,mapfile_path)
+    try:
+        result = subprocess.run(['git', 'show', tagfile], \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        result = None
+
+    for fix_strategy in FIX_STRATEGIES:
+        if result is not None:
+            break
+        result = fix_strategy(mapfile_path, release)
+
+    if result is not None:
+        mapfile = result.stdout.decode('utf-8')
+        abi_sections = map_parser(mapfile).abi()
+
+    return abi_sections
+
+def get_terminal_rows():
+    '''Find the number of rows in the terminal'''
+
+    try:
+        return os.get_terminal_size().lines
+    except IOError:
+        return 0
+
+class SymbolCountOutput():
+    '''Format the output to supported formats'''
+    output_fmt = ""
+    column_fmt = ""
+
+    def __init__(self, format_output, dpdk_releases):
+        self.OUTPUT_FORMATS[format_output](self,dpdk_releases)
+        self.column_titles = ['mapfile'] +  dpdk_releases
+
+        self.terminal_rows = get_terminal_rows()
+        self.row = 0
+
+    def set_terminal_output(self,dpdk_rel):
+        '''Set the output format to Tabbed Separated Values'''
+
+        self.output_fmt = '{:<50}' + \
+            ''.join(['{:<6}{:<6}'] * (len(dpdk_rel)))
+        self.column_fmt = '{:50}' + \
+            ''.join(['{:<12}'] * (len(dpdk_rel)))
+
+    def set_csv_output(self,dpdk_rel):
+        '''Set the output format to Comma Separated Values'''
+
+        self.output_fmt = '{},' + \
+            ','.join(['{},{}'] * (len(dpdk_rel)))
+        self.column_fmt = '{},' + \
+            ','.join(['{},'] * (len(dpdk_rel)))
+
+    def print_columns(self):
+        '''Print column rows with release names'''
+        print(self.column_fmt.format(*self.column_titles))
+        self.row += 1
+
+    def print_row(self, mapfile, symbols):
+        '''Print row of symbol values'''
+        print(self.output_fmt.format(*([mapfile] + symbols)))
+        self.row += 1
+
+        if((self.terminal_rows>0) and ((self.row % self.terminal_rows) == 0)):
+            self.print_columns()
+
+    OUTPUT_FORMATS = { None: set_terminal_output, \
+                   'terminal': set_terminal_output, \
+                   'csv': set_csv_output }
+
+class ListExpiredOutput():
+    '''Format the output to supported formats'''
+    output_fmt = ""
+    column_fmt = ""
+
+    def __init__(self, format_output, dpdk_releases):
+        self.terminal = True
+        self.OUTPUT_FORMATS[format_output](self,dpdk_releases)
+        self.column_titles = ['mapfile'] +  \
+            ['expired (' + ','.join(dpdk_releases) + ')']
+
+    def set_terminal_output(self, _):
+        '''Set the output format to Tabbed Separated Values'''
+
+        self.output_fmt = '{:<50}{:<50}'
+        self.column_fmt = '{:50}{:50}'
+
+    def set_csv_output(self, _):
+        '''Set the output format to Comma Separated Values'''
+
+        self.output_fmt = '{},{}'
+        self.column_fmt = '{},{}'
+        self.terminal = False
+
+    def print_columns(self):
+        '''Print column rows with release names'''
+        print(self.column_fmt.format(*self.column_titles))
+
+    def print_row(self, mapfile, symbols):
+        '''Print row of symbol values'''
+
+        for symbol in symbols:
+            print(self.output_fmt.format(mapfile,symbol))
+            if self.terminal :
+                mapfile = ''
+
+    OUTPUT_FORMATS = { None: set_terminal_output, \
+                   'terminal': set_terminal_output, \
+                   'csv': set_csv_output }
+
+class CountSymbolsAction:
+    ''' Logic to count symbols added since a give release '''
+    IGNORE_SECTIONS = ['EXPERIMENTAL','INTERNAL']
+
+    def __init__(self, mapfile_path, mapfile_parser, format_output):
+        self.path = mapfile_path
+        self.parser = mapfile_parser
+        self.format_output = format_output
+        self.symbols_count = []
+
+    def add_mapfile(self, release):
+        ''' add a version mapfile '''
+        symbol_count = experimental_count = 0
+
+        symbols = get_symbols(self.parser, release, self.path)
+
+        # which versions are present, and we care about
+        abi_vers = [abi_ver \
+                    for abi_ver in symbols \
+                    if abi_ver not in self.IGNORE_SECTIONS]
+
+        for abi_ver in abi_vers:
+            symbol_count += len(symbols[abi_ver])
+
+        # count experimental symbols
+        if 'EXPERIMENTAL' in symbols.keys():
+            experimental_count = len(symbols['EXPERIMENTAL'])
+
+        self.symbols_count += [symbol_count, experimental_count]
+
+    def __del__(self):
+        self.format_output.print_row(self.path.parent.name, self.symbols_count)
+
+class ListExpiredAction:
+    ''' Logic to list expired symbols between two releases '''
+
+    def __init__(self, mapfile_path, mapfile_parser, format_output):
+        self.path = mapfile_path
+        self.parser = mapfile_parser
+        self.format_output = format_output
+        self.experimental_symbols = []
+
+    def add_mapfile(self, release):
+        ''' add a version mapfile '''
+        symbols = get_symbols(self.parser, release, self.path)
+        if 'EXPERIMENTAL' in symbols.keys():
+            self.experimental_symbols.append(symbols['EXPERIMENTAL'])
+
+    def __del__(self):
+        if len(self.experimental_symbols) != 2:
+            return
+
+        tmp = self.experimental_symbols
+        # find symbols present in both dpdk releases
+        intersect_syms = [sym for sym in tmp[0] if sym in tmp[1]]
+
+        # check for empty set
+        if intersect_syms == []:
+            return
+
+        self.format_output.print_row(self.path.parent.name, intersect_syms)
+
+SRC_DIRECTORIES = 'drivers,lib'
+
+ACTIONS = {None: CountSymbolsAction, \
+           'count-symbols': CountSymbolsAction, \
+           'list-expired': ListExpiredAction}
+
+ACTION_OUTPUT = {None: SymbolCountOutput, \
+                 'count-symbols': SymbolCountOutput, \
+                 'list-expired': ListExpiredOutput}
+
+def main():
+    '''Main entry point'''
+
+    dpdk_releases = get_dpdk_releases()
+
+    parser = argparse.ArgumentParser(description='Count symbols in DPDK Libs')
+    parser.add_argument('mode', choices=['count-symbols','list-expired'])
+    parser.add_argument('--format-output', choices=['terminal','csv'], \
+                        default='terminal')
+    parser.add_argument('--directory', choices=SRC_DIRECTORIES.split(','),
+                        default=SRC_DIRECTORIES)
+    parser.add_argument('--releases', \
+                        help='2 x comma separated release tags e.g. \'' \
+                        + ','.join([dpdk_releases[0],dpdk_releases[-1]]) \
+                        + '\'')
+    args = parser.parse_args()
+
+    if args.releases is not None:
+        dpdk_releases = args.releases.split(',')
+
+    if args.mode == 'list-expired':
+        if len(dpdk_releases) < 2:
+            sys.exit('Please specify two releases to compare ' \
+                     'in \'list-expired\' mode.')
+        dpdk_releases = [dpdk_releases[0], dpdk_releases[len(dpdk_releases) - 1]]
+
+    action = ACTIONS[args.mode]
+    format_output = ACTION_OUTPUT[args.mode](args.format_output, dpdk_releases)
+
+    map_grammar = MAP_GRAMMAR.format(get_abi_versions())
+    map_parser = makeGrammar(map_grammar, {})
+
+    format_output.print_columns()
+
+    for src_dir in args.directory.split(','):
+        for path in Path(src_dir).rglob('*.map'):
+            release_action = action(path, map_parser, format_output)
+
+            for release in dpdk_releases:
+                release_action.add_mapfile(release)
+
+            # all the magic happens in the destructor
+            del release_action
+
+if __name__ == '__main__':
+    main()
-- 
2.26.2


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v7] devtools: script to track map symbols
    2021-08-04 16:23  5% ` [dpdk-dev] [PATCH v6] " Ray Kinsella
@ 2021-08-04 16:27  5% ` Ray Kinsella
                     ` (2 subsequent siblings)
  4 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-08-04 16:27 UTC (permalink / raw)
  To: dev; +Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr

This script tracks the growth of stable and experimental symbols
over releases since v19.11. The script has the ability to
count the added symbols between two dpdk releases, and to
list experimental symbols present in two dpdk releases
(expired symbols).

example usages:

Count symbols added since v19.11
$ devtools/symbol_tool.py count-symbols

Count symbols added since v20.11
$ devtools/symbol_tool.py count-symbols --releases v20.11,v21.05

List experimental symbols present in v20.11 and v21.05
$ devtools/symbol_tool.py list-expired --releases v20.11,v21.05

List experimental symbols in libraries only, present since v19.11
$ devtools/symbol_tool.py list-expired --directory lib
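
For illustration only (not part of this patch), a minimal Python sketch of
the core mechanism the tool relies on: reading a library's version.map as it
existed at a given release tag via "git show", before handing the text to the
Parsley grammar. The tag and mapfile path below are assumptions chosen for
the example; run it from a DPDK git checkout that has the release tags.

# sketch only: fetch a historical mapfile, as symbol_tool.py does internally
import subprocess

TAG = 'v21.05'                    # hypothetical release tag
MAPFILE = 'lib/eal/version.map'   # hypothetical mapfile path

tagfile = '{}:{}'.format(TAG, MAPFILE)
result = subprocess.run(['git', 'show', tagfile],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE,
                        check=True)
print(result.stdout.decode('utf-8'))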

Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
v2: reworked to fix pylint errors
v3: sent with the correct in-reply-to
v4: fix typos picked up by the CI
v5: fix terminal_size & directory args
v6: added list-expired, to list expired experimental symbols
v7: fix typo in comments

 devtools/symbol_tool.py | 377 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 377 insertions(+)
 create mode 100755 devtools/symbol_tool.py

diff --git a/devtools/symbol_tool.py b/devtools/symbol_tool.py
new file mode 100755
index 0000000000..f2a2d43a15
--- /dev/null
+++ b/devtools/symbol_tool.py
@@ -0,0 +1,377 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+'''Tool to count or list symbols in each DPDK release'''
+from pathlib import Path
+import sys
+import os
+import subprocess
+import argparse
+import re
+import datetime
+try:
+    from parsley import makeGrammar
+except ImportError:
+    print('This script uses the package Parsley to parse C Mapfiles.\n'
+          'This can be installed with \"pip install parsley".')
+    sys.exit()
+
+MAP_GRAMMAR = r"""
+
+ws = (' ' | '\r' | '\n' | '\t')*
+
+ABI_VER = ({})
+DPDK_VER = ('DPDK_' ABI_VER)
+ABI_NAME = ('INTERNAL' | 'EXPERIMENTAL' | DPDK_VER)
+comment = '#' (~'\n' anything)+ '\n'
+symbol = (~(';' | '}}' | '#') anything )+:c ';' -> ''.join(c)
+global = 'global:'
+local = 'local: *;'
+symbols = comment* symbol:s ws comment* -> s
+
+abi = (abi_section+):m -> dict(m)
+abi_section = (ws ABI_NAME:e ws '{{' ws global* (~local ws symbols)*:s ws local* ws '}}' ws DPDK_VER* ';' ws) -> (e,s)
+"""
+
+def get_abi_versions():
+    '''Returns a string of possible dpdk abi versions'''
+
+    year = datetime.date.today().year - 2000
+    tags = " |".join(['\'{}\''.format(i) \
+                     for i in reversed(range(21, year + 1)) ])
+    tags  = tags + ' | \'20.0.1\' | \'20.0\' | \'20\''
+
+    return tags
+
+def get_dpdk_releases():
+    '''Returns a list of dpdk release tags names  since v19.11'''
+
+    year = datetime.date.today().year - 2000
+    year_range = "|".join("{}".format(i) for i in range(19,year + 1))
+    pattern = re.compile(r'^\"v(' +  year_range + r')\.\d{2}\"$')
+
+    cmd = ['git', 'for-each-ref', '--sort=taggerdate', '--format', '"%(tag)"']
+    try:
+        result = subprocess.run(cmd, \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        print("Failed to interogate git for release tags")
+        sys.exit()
+
+
+    tags = result.stdout.decode('utf-8').split('\n')
+
+    # find the non-rcs between now and v19.11
+    tags = [ tag.replace('\"','') \
+             for tag in reversed(tags) \
+             if pattern.match(tag) ][:-3]
+
+    return tags
+
+def fix_directory_name(path):
+    '''Prepend librte to the source directory name'''
+    mapfilepath1 = str(path.parent.name)
+    mapfilepath2 = str(path.parents[1])
+    mapfilepath = mapfilepath2 + '/librte_' + mapfilepath1
+
+    return mapfilepath
+
+def directory_renamed(path, rel):
+    '''Fix removal of the librte_ from the directory names'''
+
+    mapfilepath = fix_directory_name(path)
+    tagfile = '{}:{}/{}'.format(rel, mapfilepath,  path.name)
+
+    try:
+        result = subprocess.run(['git', 'show', tagfile], \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        result = None
+
+    return result
+
+def mapfile_renamed(path, rel):
+    '''Fix renaming of the map file'''
+    newfile = None
+
+    result = subprocess.run(['git', 'ls-tree', \
+                             rel, str(path.parent) + '/'], \
+                            stdout=subprocess.PIPE, \
+                            stderr=subprocess.PIPE,
+                            check=True)
+    dentries = result.stdout.decode('utf-8')
+    dentries = dentries.split('\n')
+
+    # filter entries looking for the map file
+    dentries = [dentry for dentry in dentries if dentry.endswith('.map')]
+    if len(dentries) > 1 or len(dentries) == 0:
+        return None
+
+    dparts = dentries[0].split('/')
+    newfile = dparts[len(dparts) - 1]
+
+    if newfile is not None:
+        tagfile = '{}:{}/{}'.format(rel, path.parent, newfile)
+
+        try:
+            result = subprocess.run(['git', 'show', tagfile], \
+                                    stdout=subprocess.PIPE, \
+                                    stderr=subprocess.PIPE,
+                                    check=True)
+        except subprocess.CalledProcessError:
+            result = None
+
+    else:
+        result = None
+
+    return result
+
+def mapfile_and_directory_renamed(path, rel):
+    '''Fix renaming of the map file & the source directory'''
+    mapfilepath = Path("{}/{}".format(fix_directory_name(path),path.name))
+
+    return mapfile_renamed(mapfilepath, rel)
+
+FIX_STRATEGIES = [directory_renamed, \
+                  mapfile_renamed, \
+                  mapfile_and_directory_renamed]
+
+def get_symbols(map_parser, release, mapfile_path):
+    '''Count the symbols for a given release and mapfile'''
+    abi_sections = {}
+
+    tagfile = '{}:{}'.format(release,mapfile_path)
+    try:
+        result = subprocess.run(['git', 'show', tagfile], \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        result = None
+
+    for fix_strategy in FIX_STRATEGIES:
+        if result is not None:
+            break
+        result = fix_strategy(mapfile_path, release)
+
+    if result is not None:
+        mapfile = result.stdout.decode('utf-8')
+        abi_sections = map_parser(mapfile).abi()
+
+    return abi_sections
+
+def get_terminal_rows():
+    '''Find the number of rows in the terminal'''
+
+    try:
+        return os.get_terminal_size().lines
+    except IOError:
+        return 0
+
+class SymbolCountOutput():
+    '''Format the output to supported formats'''
+    output_fmt = ""
+    column_fmt = ""
+
+    def __init__(self, format_output, dpdk_releases):
+        self.OUTPUT_FORMATS[format_output](self,dpdk_releases)
+        self.column_titles = ['mapfile'] +  dpdk_releases
+
+        self.terminal_rows = get_terminal_rows()
+        self.row = 0
+
+    def set_terminal_output(self,dpdk_rel):
+        '''Set the output format to fixed-width columns for the terminal'''
+
+        self.output_fmt = '{:<50}' + \
+            ''.join(['{:<6}{:<6}'] * (len(dpdk_rel)))
+        self.column_fmt = '{:50}' + \
+            ''.join(['{:<12}'] * (len(dpdk_rel)))
+
+    def set_csv_output(self,dpdk_rel):
+        '''Set the output format to Comma Separated Values'''
+
+        self.output_fmt = '{},' + \
+            ','.join(['{},{}'] * (len(dpdk_rel)))
+        self.column_fmt = '{},' + \
+            ','.join(['{},'] * (len(dpdk_rel)))
+
+    def print_columns(self):
+        '''Print column rows with release names'''
+        print(self.column_fmt.format(*self.column_titles))
+        self.row += 1
+
+    def print_row(self, mapfile, symbols):
+        '''Print row of symbol values'''
+        print(self.output_fmt.format(*([mapfile] + symbols)))
+        self.row += 1
+
+        if((self.terminal_rows>0) and ((self.row % self.terminal_rows) == 0)):
+            self.print_columns()
+
+    OUTPUT_FORMATS = { None: set_terminal_output, \
+                   'terminal': set_terminal_output, \
+                   'csv': set_csv_output }
+
+class ListExpiredOutput():
+    '''Format the output to supported formats'''
+    output_fmt = ""
+    column_fmt = ""
+
+    def __init__(self, format_output, dpdk_releases):
+        self.terminal = True
+        self.OUTPUT_FORMATS[format_output](self,dpdk_releases)
+        self.column_titles = ['mapfile'] +  \
+            ['expired (' + ','.join(dpdk_releases) + ')']
+
+    def set_terminal_output(self, _):
+        '''Set the output format to fixed-width columns for the terminal'''
+
+        self.output_fmt = '{:<50}{:<50}'
+        self.column_fmt = '{:50}{:50}'
+
+    def set_csv_output(self, _):
+        '''Set the output format to Comma Separated Values'''
+
+        self.output_fmt = '{},{}'
+        self.column_fmt = '{},{}'
+        self.terminal = False
+
+    def print_columns(self):
+        '''Print column rows with release names'''
+        print(self.column_fmt.format(*self.column_titles))
+
+    def print_row(self, mapfile, symbols):
+        '''Print row of symbol values'''
+
+        for symbol in symbols:
+            print(self.output_fmt.format(mapfile,symbol))
+            if self.terminal :
+                mapfile = ''
+
+    OUTPUT_FORMATS = { None: set_terminal_output, \
+                   'terminal': set_terminal_output, \
+                   'csv': set_csv_output }
+
+class CountSymbolsAction:
+    ''' Logic to count symbols added since a given release '''
+    IGNORE_SECTIONS = ['EXPERIMENTAL','INTERNAL']
+
+    def __init__(self, mapfile_path, mapfile_parser, format_output):
+        self.path = mapfile_path
+        self.parser = mapfile_parser
+        self.format_output = format_output
+        self.symbols_count = []
+
+    def add_mapfile(self, release):
+        ''' add a version mapfile '''
+        symbol_count = experimental_count = 0
+
+        symbols = get_symbols(self.parser, release, self.path)
+
+        # which versions are present, and which we care about
+        abi_vers = [abi_ver \
+                    for abi_ver in symbols \
+                    if abi_ver not in self.IGNORE_SECTIONS]
+
+        for abi_ver in abi_vers:
+            symbol_count += len(symbols[abi_ver])
+
+        # count experimental symbols
+        if 'EXPERIMENTAL' in symbols.keys():
+            experimental_count = len(symbols['EXPERIMENTAL'])
+
+        self.symbols_count += [symbol_count, experimental_count]
+
+    def __del__(self):
+        self.format_output.print_row(self.path.parent.name, self.symbols_count)
+
+class ListExpiredAction:
+    ''' Logic to list expired symbols between two releases '''
+
+    def __init__(self, mapfile_path, mapfile_parser, format_output):
+        self.path = mapfile_path
+        self.parser = mapfile_parser
+        self.format_output = format_output
+        self.experimental_symbols = []
+
+    def add_mapfile(self, release):
+        ''' add a version mapfile '''
+        symbols = get_symbols(self.parser, release, self.path)
+        if 'EXPERIMENTAL' in symbols.keys():
+            self.experimental_symbols.append(symbols['EXPERIMENTAL'])
+
+    def __del__(self):
+        if len(self.experimental_symbols) != 2:
+            return
+
+        tmp = self.experimental_symbols
+        # find symbols present in both dpdk releases
+        intersect_syms = [sym for sym in tmp[0] if sym in tmp[1]]
+
+        # check for empty set
+        if intersect_syms == []:
+            return
+
+        self.format_output.print_row(self.path.parent.name, intersect_syms)
+
+SRC_DIRECTORIES = 'drivers,lib'
+
+ACTIONS = {None: CountSymbolsAction, \
+           'count-symbols': CountSymbolsAction, \
+           'list-expired': ListExpiredAction}
+
+ACTION_OUTPUT = {None: SymbolCountOutput, \
+                 'count-symbols': SymbolCountOutput, \
+                 'list-expired': ListExpiredOutput}
+
+def main():
+    '''Main entry point'''
+
+    dpdk_releases = get_dpdk_releases()
+
+    parser = argparse.ArgumentParser(description='Count symbols in DPDK Libs')
+    parser.add_argument('mode', choices=['count-symbols','list-expired'])
+    parser.add_argument('--format-output', choices=['terminal','csv'], \
+                        default='terminal')
+    parser.add_argument('--directory', choices=SRC_DIRECTORIES.split(','),
+                        default=SRC_DIRECTORIES)
+    parser.add_argument('--releases', \
+                        help='2 x comma separated release tags e.g. \'' \
+                        + ','.join([dpdk_releases[0],dpdk_releases[-1]]) \
+                        + '\'')
+    args = parser.parse_args()
+
+    if args.releases is not None:
+        dpdk_releases = args.releases.split(',')
+
+    if args.mode == 'list-expired':
+        if len(dpdk_releases) < 2:
+            sys.exit('Please specify two releases to compare ' \
+                     'in \'list-expired\' mode.')
+        dpdk_releases = [dpdk_releases[0], dpdk_releases[len(dpdk_releases) - 1]]
+
+    action = ACTIONS[args.mode]
+    format_output = ACTION_OUTPUT[args.mode](args.format_output, dpdk_releases)
+
+    map_grammar = MAP_GRAMMAR.format(get_abi_versions())
+    map_parser = makeGrammar(map_grammar, {})
+
+    format_output.print_columns()
+
+    for src_dir in args.directory.split(','):
+        for path in Path(src_dir).rglob('*.map'):
+            release_action = action(path, map_parser, format_output)
+
+            for release in dpdk_releases:
+                release_action.add_mapfile(release)
+
+            # all the magic happens in the destructor
+            del release_action
+
+if __name__ == '__main__':
+    main()
-- 
2.26.2


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [PATCH v2] doc: announce changes to eventdev library
  2021-08-03  4:12  3%   ` Jerin Jacob
                       ` (2 preceding siblings ...)
  2021-08-04  6:06  0%     ` Gujjar, Abhinandan S
@ 2021-08-05 14:22  0%     ` Thomas Monjalon
  3 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-08-05 14:22 UTC (permalink / raw)
  To: Pavan Nikhilesh, Jerin Jacob
  Cc: Gujjar, Abhinandan S, Erik Gabriel Carrillo, Van Haaren, Harry,
	Hemant Agrawal, McDaniel, Timothy, Liang Ma, Jayatheerthan, Jay,
	dev, Ray Kinsella, Mattias Rönnblom, Jerin Jacob

03/08/2021 06:12, Jerin Jacob:
> On Tue, Aug 3, 2021 at 2:46 AM <pbhagavatula@marvell.com> wrote:
> >
> > From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> >
> > Make driver layer as internal, remove unnecessary rte_ prefix for
> > structures and functions that are not a part of public API.
> > Promote experimental trace and vector APIs to stable.
> > Add reserved field to `rte_event_timer` structure.
> >
> > Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> 
> 
> ++ Eventdev driver Maintainers.
> 
> This list is based on items identified for 21.11 ABI improvement at
> https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0

    Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
    Acked-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
    Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
    Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>


> > +* eventdev: The file ``rte_eventdev_pmd.h`` will be renamed to ``eventdev_driver.h``
> > +  to make the driver interface as internal and the structures ``rte_eventdev_data``,
> > +  ``rte_eventdev`` and ``rte_eventdevs`` will be moved to a new file named
> > +  ``rte_eventdev_core.h`` in DPDK 21.11.
> > +  The ``rte_`` prefix for internal structures and functions will be removed across the
> > +  library.

If a function is used outside of the library (in drivers),
it is better to keep rte_ prefix to avoid possible clash
with some driver dependencies.

> > +  The experimental eventdev trace APIs and ``rte_event_vector_pool_create``,
> > +  ``rte_event_eth_rx_adapter_vector_limits_get`` will be promoted to stable.
> > +  An 8byte reserved field will be added to the structure ``rte_event_timer`` to
> > +  support future extensions.

Applied, thanks.



^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2] doc: announce restructuring of crypto session structs
  @ 2021-08-05 15:03  3%         ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-08-05 15:03 UTC (permalink / raw)
  To: Zhang, Roy Fan, dev
  Cc: Anoob Joseph, Nicolau, Radu, Doherty, Declan, hemant.agrawal,
	matan, Ananyev, Konstantin, thomas, asomalap, ruifeng.wang,
	ajit.khaparde, De Lara Guarch, Pablo, Trahe, Fiona,
	Ankur Dwivedi, Michael Shamis, Nagadheeraj Rottela, jianjay.zhou

> Hi Akhil,
> 
> No problem. Glad to help. If you have code ready to share please let me
> know.
> 
I haven't started work on this yet. There are a few items in ABI improvements;
if you could pick some of them, it would be helpful.
I am currently working on the PMD interface.
- Security and crypto session structs are next in line.
If you can spend some time, you could work on
splitting and hiding rte_cryptodev and rte_cryptodev_data.

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v1] doc: update release notes for 21.08
@ 2021-08-05 21:57  7% John McNamara
  0 siblings, 0 replies; 200+ results
From: John McNamara @ 2021-08-05 21:57 UTC (permalink / raw)
  To: dev; +Cc: thomas, John McNamara

Fix grammar, spelling and formatting of DPDK 21.08 release notes.

Signed-off-by: John McNamara <john.mcnamara@intel.com>
---
 doc/guides/rel_notes/release_21_08.rst | 78 ++++++++------------------
 1 file changed, 24 insertions(+), 54 deletions(-)

diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst
index d7559ec6bf..0a7b817d9f 100644
--- a/doc/guides/rel_notes/release_21_08.rst
+++ b/doc/guides/rel_notes/release_21_08.rst
@@ -57,20 +57,20 @@ New Features
 
 * **Added auxiliary bus support.**
 
-  Auxiliary bus provides a way to split function into child-devices
+  An auxiliary bus provides a way to split a function into child-devices
   representing sub-domains of functionality. Each auxiliary device
   represents a part of its parent functionality.
 
 * **Added XZ compressed firmware support.**
 
-  Using ``rte_firmware_read``, a driver can now handle XZ compressed firmware
-  in a transparent way, with EAL uncompressing using libarchive if this library
+  Using ``rte_firmware_read`` a driver can now handle XZ compressed firmware
+  in a transparent way, with EAL uncompressing using libarchive, if this library
   is available when building DPDK.
 
 * **Updated Amazon ENA PMD.**
 
-  The new driver version (v2.4.0) introduced bug fixes and improvements,
-  including:
+  Updated the Amazon ENA PMD. The new driver version (v2.4.0) introduced bug
+  fixes and improvements, including:
 
   * Added Rx interrupt support.
   * RSS hash function key reconfiguration support.
@@ -78,20 +78,20 @@ New Features
 * **Updated Intel iavf driver.**
 
   * Added Tx QoS VF queue TC mapping.
-  * Added FDIR and RSS for GTPoGRE, support filter based on GTPU TEID/QFI,
-    outer most L3 or inner most l3/l4. 
+  * Added FDIR and RSS for GTPoGRE, and support for filters based on GTPU TEID/QFI,
+    outermost L3 or innermost L3/L4.
 
 * **Updated Intel ice driver.**
 
-  * In AVX2 code, added the new RX and TX paths to use the HW offload
+  * Added new RX and TX paths in the AVX2 code to use HW offload
     features. When the HW offload features are configured to be used, the
     offload paths are chosen automatically. In parallel the support for HW
     offload features was removed from the legacy AVX2 paths.
   * Added Tx QoS TC bandwidth configuration in DCF.
 
-* **Added support for Marvell CN10K SoC ethernet device.**
+* **Added support for Marvell CN10K SoC Ethernet device.**
 
-  * Added net/cnxk driver which provides the support for the integrated ethernet
+  * Added net/cnxk driver which provides the support for the integrated Ethernet
     device.
 
 * **Updated Mellanox mlx5 driver.**
@@ -100,44 +100,44 @@ New Features
   * Added support for meter hierarchy.
   * Added support for metering policy actions of yellow color.
   * Added support for metering trTCM RFC2698 and RFC4115.
-  * Added devargs options ``allow_duplicate_pattern``.
+  * Added devargs option ``allow_duplicate_pattern``.
   * Added matching on IPv4 Internet Header Length (IHL).
   * Added support for matching on VXLAN header last 8-bits reserved field.
   * Optimized multi-thread flow rule insertion rate.
 
 * **Added Wangxun ngbe PMD.**
 
-  Added a new PMD driver for Wangxun 1 Gigabit Ethernet NICs.
+  Added a new PMD driver for Wangxun 1Gb Ethernet NICs.
   See the :doc:`../nics/ngbe` for more details.
 
 * **Updated Solarflare network PMD.**
 
   Updated the Solarflare ``sfc_efx`` driver with changes including:
 
-  * Added COUNT action support for SN1000 NICs
+  * Added COUNT action support for SN1000 NICs.
 
 * **Added inflight packets clear API in vhost library.**
 
-  Added an API which can clear the inflight packets submitted to DMA
-  engine in vhost async data path.
+  Added an API which can clear the inflight packets submitted to the DMA
+  engine in the vhost async data path.
 
 * **Updated Intel QuickAssist crypto PMD.**
 
   Added fourth generation of QuickAssist Technology(QAT) devices support.
-  Only symmetric crypto has been currently enabled, compression and asymmetric
+  Only symmetric crypto has been currently enabled. Compression and asymmetric
   crypto PMD will fail to create.
 
 * **Added support for Marvell CNXK crypto driver.**
 
   * Added cnxk crypto PMD which provides support for an integrated
     crypto driver for CN9K and CN10K series of SOCs. Support for
-    symmetric crypto algorithms is added to both the PMDs.
+    symmetric crypto algorithms was added to both the PMDs.
   * Added support for lookaside protocol (IPsec) offload in cn10k PMD.
   * Added support for asymmetric crypto operations in cn9k and cn10k PMD.
 
 * **Updated Marvell OCTEON TX crypto PMD.**
 
-  Added support for crypto adapter OP_FORWARD mode.
+  Added support for crypto adapter ``OP_FORWARD`` mode.
 
 * **Added support for Nvidia crypto device driver.**
 
@@ -150,14 +150,14 @@ New Features
 
 * **Added Baseband PHY CNXK PMD.**
 
-  Added Baseband PHY PMD which allows to configure BPHY hardware block
+  Added Baseband PHY PMD which allows configuration of the BPHY hardware block
   comprising accelerators and DSPs specifically tailored for 5G/LTE inline
   use cases. Configuration happens via standard rawdev enq/deq operations. See
   the :doc:`../rawdevs/cnxk_bphy` rawdev guide for more details on this driver.
 
 * **Added support for Marvell CN10K, CN9K, event Rx/Tx adapter.**
 
-  * Added Rx/Tx adapter support for event/cnxk when the ethernet device requested
+  * Added Rx/Tx adapter support for event/cnxk when the Ethernet device requested
     is net/cnxk.
   * Added support for event vectorization for Rx/Tx adapter.
 
@@ -165,29 +165,15 @@ New Features
 
   Added support for cppc_cpufreq driver which works on most arm64 platforms.
 
-* **Added multi-queue support to Ethernet PMD Power Management**
+* **Added multi-queue support to Ethernet PMD Power Management.**
 
   The experimental PMD power management API now supports managing
   multiple Ethernet Rx queues per lcore.
 
-* **Updated testpmd to log errors to stderr.**
-
-  Updated testpmd application to log errors and warnings to stderr
-  instead of stdout used before.
-
-
-Removed Items
--------------
-
-.. This section should contain removed items in this release. Sample format:
-
-   * Add a short 1-2 sentence description of the removed item
-     in the past tense.
-
-   This section is a comment. Do not overwrite or remove it.
-   Also, make sure to start the actual text at the margin.
-   =======================================================
+* **Updated testpmd to output log errors to stderr.**
 
+  Updated testpmd application to output log errors and warnings to stderr
+  instead of stdout.
 
 API Changes
 -----------
@@ -236,22 +222,6 @@ ABI Changes
 
 * No ABI change that would break compatibility with 20.11.
 
-
-Known Issues
-------------
-
-.. This section should contain new known issues in this release. Sample format:
-
-   * **Add title in present tense with full stop.**
-
-     Add a short 1-2 sentence description of the known issue
-     in the present tense. Add information on any known workarounds.
-
-   This section is a comment. Do not overwrite or remove it.
-   Also, make sure to start the actual text at the margin.
-   =======================================================
-
-
 Tested Platforms
 ----------------
 
-- 
2.25.1


^ permalink raw reply	[relevance 7%]

* [dpdk-dev] [PATCH v8 1/2] devtools: script to track map symbols
  @ 2021-08-06 17:54  5%   ` Ray Kinsella
  2021-08-06 17:54  5%   ` [dpdk-dev] [PATCH v8 2/2] devtools: script to send notifications of expired symbols Ray Kinsella
  1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2021-08-06 17:54 UTC (permalink / raw)
  To: dev; +Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr

This script tracks the growth of stable and experimental symbols
over releases since v19.11. The script has the ability to
count the added symbols between two dpdk releases, and to
list experimental symbols present in two dpdk releases
(expired symbols).

example usages:

Count symbols added since v19.11
$ devtools/symbol-tool.py count-symbols

Count symbols added since v20.11
$ devtools/symbol-tool.py count-symbols --releases v20.11,v21.05

List experimental symbols present in v20.11 and v21.05
$ devtools/symbol-tool.py list-expired --releases v20.11,v21.05

List experimental symbols in libraries only, present since v19.11
$ devtools/symbol-tool.py list-expired --directory lib
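
As an aside (not part of this patch), a minimal Python sketch of the idea
behind 'list-expired': a symbol that is still in the EXPERIMENTAL section of
both the older and the newer release is reported as expired. The symbol names
below are made up for illustration.

# sketch only: expired symbols are the intersection of the EXPERIMENTAL
# sections of the two releases being compared
old_experimental = {'rte_foo_stats_get', 'rte_foo_reset'}  # e.g. v20.11
new_experimental = {'rte_foo_stats_get'}                   # e.g. v21.05
expired = sorted(old_experimental & new_experimental)
print(expired)  # ['rte_foo_stats_get']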

Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
 devtools/symbol-tool.py | 402 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 402 insertions(+)
 create mode 100755 devtools/symbol-tool.py

diff --git a/devtools/symbol-tool.py b/devtools/symbol-tool.py
new file mode 100755
index 0000000000..39727c9a32
--- /dev/null
+++ b/devtools/symbol-tool.py
@@ -0,0 +1,402 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+'''Tool to count or list symbols in each DPDK release'''
+from pathlib import Path
+import sys
+import os
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import re
+import datetime
+try:
+    from parsley import makeGrammar
+except ImportError:
+    print('This script uses the package Parsley to parse C Mapfiles.\n'
+          'This can be installed with \"pip install parsley".')
+    sys.exit()
+
+DESCRIPTION = '''
+This script tracks the growth of stable and experimental symbols
+over releases since v19.11. The script has the ability to
+count the added symbols between two dpdk releases, and to
+list experimental symbols present in two dpdk releases
+(expired symbols).
+
+example usages:
+
+Count symbols added since v19.11
+$ devtools/symbol-tool.py count-symbols
+
+Count symbols added since v20.11
+$ devtools/symbol-tool.py count-symbols --releases v20.11,v21.05
+
+List experimental symbols present in v20.11 and v21.05
+$ devtools/symbol-tool.py list-expired --releases v20.11,v21.05
+
+List experimental symbols in libraries only, present since v19.11
+$ devtools/symbol-tool.py list-expired --directory lib
+'''
+
+MAP_GRAMMAR = r"""
+
+ws = (' ' | '\r' | '\n' | '\t')*
+
+ABI_VER = ({})
+DPDK_VER = ('DPDK_' ABI_VER)
+ABI_NAME = ('INTERNAL' | 'EXPERIMENTAL' | DPDK_VER)
+comment = '#' (~'\n' anything)+ '\n'
+symbol = (~(';' | '}}' | '#') anything )+:c ';' -> ''.join(c)
+global = 'global:'
+local = 'local: *;'
+symbols = comment* symbol:s ws comment* -> s
+
+abi = (abi_section+):m -> dict(m)
+abi_section = (ws ABI_NAME:e ws '{{' ws global* (~local ws symbols)*:s ws local* ws '}}' ws DPDK_VER* ';' ws) -> (e,s)
+"""
+
+def get_abi_versions():
+    '''Returns a string of possible dpdk abi versions'''
+
+    year = datetime.date.today().year - 2000
+    tags = " |".join(['\'{}\''.format(i) \
+                     for i in reversed(range(21, year + 1)) ])
+    tags  = tags + ' | \'20.0.1\' | \'20.0\' | \'20\''
+
+    return tags
+
+def get_dpdk_releases():
+    '''Returns a list of dpdk release tags names  since v19.11'''
+
+    year = datetime.date.today().year - 2000
+    year_range = "|".join("{}".format(i) for i in range(19,year + 1))
+    pattern = re.compile(r'^\"v(' +  year_range + r')\.\d{2}\"$')
+
+    cmd = ['git', 'for-each-ref', '--sort=taggerdate', '--format', '"%(tag)"']
+    try:
+        result = subprocess.run(cmd, \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        print("Failed to interogate git for release tags")
+        sys.exit()
+
+
+    tags = result.stdout.decode('utf-8').split('\n')
+
+    # find the non-rcs between now and v19.11
+    tags = [ tag.replace('\"','') \
+             for tag in reversed(tags) \
+             if pattern.match(tag) ][:-3]
+
+    return tags
+
+def fix_directory_name(path):
+    '''Prepend librte to the source directory name'''
+    mapfilepath1 = str(path.parent.name)
+    mapfilepath2 = str(path.parents[1])
+    mapfilepath = mapfilepath2 + '/librte_' + mapfilepath1
+
+    return mapfilepath
+
+def directory_renamed(path, rel):
+    '''Fix removal of the librte_ from the directory names'''
+
+    mapfilepath = fix_directory_name(path)
+    tagfile = '{}:{}/{}'.format(rel, mapfilepath,  path.name)
+
+    try:
+        result = subprocess.run(['git', 'show', tagfile], \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        result = None
+
+    return result
+
+def mapfile_renamed(path, rel):
+    '''Fix renaming of the map file'''
+    newfile = None
+
+    result = subprocess.run(['git', 'ls-tree', \
+                             rel, str(path.parent) + '/'], \
+                            stdout=subprocess.PIPE, \
+                            stderr=subprocess.PIPE,
+                            check=True)
+    dentries = result.stdout.decode('utf-8')
+    dentries = dentries.split('\n')
+
+    # filter entries looking for the map file
+    dentries = [dentry for dentry in dentries if dentry.endswith('.map')]
+    if len(dentries) > 1 or len(dentries) == 0:
+        return None
+
+    dparts = dentries[0].split('/')
+    newfile = dparts[len(dparts) - 1]
+
+    if newfile is not None:
+        tagfile = '{}:{}/{}'.format(rel, path.parent, newfile)
+
+        try:
+            result = subprocess.run(['git', 'show', tagfile], \
+                                    stdout=subprocess.PIPE, \
+                                    stderr=subprocess.PIPE,
+                                    check=True)
+        except subprocess.CalledProcessError:
+            result = None
+
+    else:
+        result = None
+
+    return result
+
+def mapfile_and_directory_renamed(path, rel):
+    '''Fix renaming of the map file & the source directory'''
+    mapfilepath = Path("{}/{}".format(fix_directory_name(path),path.name))
+
+    return mapfile_renamed(mapfilepath, rel)
+
+FIX_STRATEGIES = [directory_renamed, \
+                  mapfile_renamed, \
+                  mapfile_and_directory_renamed]
+
+def get_symbols(map_parser, release, mapfile_path):
+    '''Count the symbols for a given release and mapfile'''
+    abi_sections = {}
+
+    tagfile = '{}:{}'.format(release,mapfile_path)
+    try:
+        result = subprocess.run(['git', 'show', tagfile], \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        result = None
+
+    for fix_strategy in FIX_STRATEGIES:
+        if result is not None:
+            break
+        result = fix_strategy(mapfile_path, release)
+
+    if result is not None:
+        mapfile = result.stdout.decode('utf-8')
+        abi_sections = map_parser(mapfile).abi()
+
+    return abi_sections
+
+def get_terminal_rows():
+    '''Find the number of rows in the terminal'''
+
+    try:
+        return os.get_terminal_size().lines
+    except IOError:
+        return 0
+
+class SymbolCountOutput():
+    '''Format the output to supported formats'''
+    output_fmt = ""
+    column_fmt = ""
+
+    def __init__(self, format_output, dpdk_releases):
+        self.OUTPUT_FORMATS[format_output](self,dpdk_releases)
+        self.column_titles = ['mapfile'] +  dpdk_releases
+
+        self.terminal_rows = get_terminal_rows()
+        self.row = 0
+
+    def set_terminal_output(self,dpdk_rel):
+        '''Set the output format to fixed-width columns for the terminal'''
+
+        self.output_fmt = '{:<50}' + \
+            ''.join(['{:<6}{:<6}'] * (len(dpdk_rel)))
+        self.column_fmt = '{:50}' + \
+            ''.join(['{:<12}'] * (len(dpdk_rel)))
+
+    def set_csv_output(self,dpdk_rel):
+        '''Set the output format to Comma Separated Values'''
+
+        self.output_fmt = '{},' + \
+            ','.join(['{},{}'] * (len(dpdk_rel)))
+        self.column_fmt = '{},' + \
+            ','.join(['{},'] * (len(dpdk_rel)))
+
+    def print_columns(self):
+        '''Print column rows with release names'''
+        print(self.column_fmt.format(*self.column_titles))
+        self.row += 1
+
+    def print_row(self, mapfile, symbols):
+        '''Print row of symbol values'''
+        print(self.output_fmt.format(*([mapfile] + symbols)))
+        self.row += 1
+
+        if((self.terminal_rows>0) and ((self.row % self.terminal_rows) == 0)):
+            self.print_columns()
+
+    OUTPUT_FORMATS = { None: set_terminal_output, \
+                   'terminal': set_terminal_output, \
+                   'csv': set_csv_output }
+
+class ListExpiredOutput():
+    '''Format the output to supported formats'''
+    output_fmt = ""
+    column_fmt = ""
+
+    def __init__(self, format_output, dpdk_releases):
+        self.terminal = True
+        self.OUTPUT_FORMATS[format_output](self,dpdk_releases)
+        self.column_titles = ['mapfile'] +  \
+            ['expired (' + ','.join(dpdk_releases) + ')']
+
+    def set_terminal_output(self, _):
+        '''Set the output format to fixed-width columns for the terminal'''
+
+        self.output_fmt = '{:<50}{:<50}'
+        self.column_fmt = '{:50}{:50}'
+
+    def set_csv_output(self, _):
+        '''Set the output format to Comma Separated Values'''
+
+        self.output_fmt = '{},{}'
+        self.column_fmt = '{},{}'
+        self.terminal = False
+
+    def print_columns(self):
+        '''Print column rows with release names'''
+        print(self.column_fmt.format(*self.column_titles))
+
+    def print_row(self, mapfile, symbols):
+        '''Print row of symbol values'''
+
+        for symbol in symbols:
+            print(self.output_fmt.format(mapfile,symbol))
+            if self.terminal :
+                mapfile = ''
+
+    OUTPUT_FORMATS = { None: set_terminal_output, \
+                   'terminal': set_terminal_output, \
+                   'csv': set_csv_output }
+
+class CountSymbolsAction:
+    ''' Logic to count symbols added since a given release '''
+    IGNORE_SECTIONS = ['EXPERIMENTAL','INTERNAL']
+
+    def __init__(self, mapfile_path, mapfile_parser, format_output):
+        self.path = mapfile_path
+        self.parser = mapfile_parser
+        self.format_output = format_output
+        self.symbols_count = []
+
+    def add_mapfile(self, release):
+        ''' add a version mapfile '''
+        symbol_count = experimental_count = 0
+
+        symbols = get_symbols(self.parser, release, self.path)
+
+        # which versions are present, and which we care about
+        abi_vers = [abi_ver \
+                    for abi_ver in symbols \
+                    if abi_ver not in self.IGNORE_SECTIONS]
+
+        for abi_ver in abi_vers:
+            symbol_count += len(symbols[abi_ver])
+
+        # count experimental symbols
+        if 'EXPERIMENTAL' in symbols.keys():
+            experimental_count = len(symbols['EXPERIMENTAL'])
+
+        self.symbols_count += [symbol_count, experimental_count]
+
+    def __del__(self):
+        self.format_output.print_row(self.path.parent, self.symbols_count)
+
+class ListExpiredAction:
+    ''' Logic to list expired symbols between two releases '''
+
+    def __init__(self, mapfile_path, mapfile_parser, format_output):
+        self.path = mapfile_path
+        self.parser = mapfile_parser
+        self.format_output = format_output
+        self.experimental_symbols = []
+
+    def add_mapfile(self, release):
+        ''' add a version mapfile '''
+        symbols = get_symbols(self.parser, release, self.path)
+        if 'EXPERIMENTAL' in symbols.keys():
+            self.experimental_symbols.append(symbols['EXPERIMENTAL'])
+
+    def __del__(self):
+        if len(self.experimental_symbols) != 2:
+            return
+
+        tmp = self.experimental_symbols
+        # find symbols present in both dpdk releases
+        intersect_syms = [sym for sym in tmp[0] if sym in tmp[1]]
+
+        # check for empty set
+        if intersect_syms == []:
+            return
+
+        self.format_output.print_row(self.path.parent, intersect_syms)
+
+SRC_DIRECTORIES = 'drivers,lib'
+
+ACTIONS = {None: CountSymbolsAction, \
+           'count-symbols': CountSymbolsAction, \
+           'list-expired': ListExpiredAction}
+
+ACTION_OUTPUT = {None: SymbolCountOutput, \
+                 'count-symbols': SymbolCountOutput, \
+                 'list-expired': ListExpiredOutput}
+
+def main():
+    '''Main entry point'''
+
+    dpdk_releases = get_dpdk_releases()
+
+    parser = argparse.ArgumentParser(description=DESCRIPTION, \
+                                     formatter_class=RawTextHelpFormatter
+                                     )
+    parser.add_argument('mode', choices=['count-symbols','list-expired'])
+    parser.add_argument('--format-output', choices=['terminal','csv'], \
+                        default='terminal')
+    parser.add_argument('--directory', choices=SRC_DIRECTORIES.split(','),
+                        default=SRC_DIRECTORIES)
+    parser.add_argument('--releases', \
+                        help='2 x comma separated release tags e.g. \'' \
+                        + ','.join([dpdk_releases[0],dpdk_releases[-1]]) \
+                        + '\'')
+    args = parser.parse_args()
+
+    if args.releases is not None:
+        dpdk_releases = args.releases.split(',')
+
+    if args.mode == 'list-expired':
+        if len(dpdk_releases) < 2:
+            sys.exit('Please specify two releases to compare ' \
+                     'in \'list-expired\' mode.')
+        dpdk_releases = [dpdk_releases[0], dpdk_releases[len(dpdk_releases) - 1]]
+
+    action = ACTIONS[args.mode]
+    format_output = ACTION_OUTPUT[args.mode](args.format_output, dpdk_releases)
+
+    map_grammar = MAP_GRAMMAR.format(get_abi_versions())
+    map_parser = makeGrammar(map_grammar, {})
+
+    format_output.print_columns()
+
+    for src_dir in args.directory.split(','):
+        for path in Path(src_dir).rglob('*.map'):
+            release_action = action(path, map_parser, format_output)
+
+            for release in dpdk_releases:
+                release_action.add_mapfile(release)
+
+            # all the magic happens in the destructor
+            del release_action
+
+if __name__ == '__main__':
+    main()
-- 
2.26.2


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v8 2/2] devtools: script to send notifications of expired symbols
    2021-08-06 17:54  5%   ` [dpdk-dev] [PATCH v8 1/2] devtools: script to track map symbols Ray Kinsella
@ 2021-08-06 17:54  5%   ` Ray Kinsella
  1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2021-08-06 17:54 UTC (permalink / raw)
  To: dev; +Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr

Use this script with the output of the DPDK symbol tool to notify
maintainers of expired symbols by email. You need to define the environment
variable DPDK_GETMAINTAINER_PATH for this tool to work.

Use terminal output to review the emails before sending.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output terminal

Then use email output to send the emails to the maintainers.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify-symbol-maintainers.py --format-output email \
--smtp-server <server> --sender <someone@somewhere.com> \
--password <password>
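
For reference (not part of this patch), a minimal Python sketch of the CSV
rows this script expects on stdin from 'symbol-tool.py list-expired
--format-output csv', and how each row splits into a mapfile directory and a
symbol. The directories and symbol names below are made up for illustration.

# sketch only: the first row is the column header, every following row is
# "<mapfile directory>,<expired symbol>"
SAMPLE = (
    'mapfile,expired (v19.11,v21.08)\n'
    'lib/foo,rte_foo_stats_get\n'
    'lib/foo,rte_foo_reset\n'
    'drivers/net/bar,rte_pmd_bar_dump\n'
)

for line in SAMPLE.splitlines():
    library, symbol = line.split(',', 1)
    if library == 'mapfile':
        continue
    print(library, symbol)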

Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
 devtools/notify-symbol-maintainers.py | 224 ++++++++++++++++++++++++++
 1 file changed, 224 insertions(+)
 create mode 100755 devtools/notify-symbol-maintainers.py

diff --git a/devtools/notify-symbol-maintainers.py b/devtools/notify-symbol-maintainers.py
new file mode 100755
index 0000000000..447f88bb03
--- /dev/null
+++ b/devtools/notify-symbol-maintainers.py
@@ -0,0 +1,224 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+'''Tool to notify maintainers of expired symbols'''
+import smtplib
+import ssl
+import sys
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import time
+from email.message import EmailMessage
+
+DESCRIPTION = '''
+Use this script with the output of the DPDK symbol tool to notify maintainers
+of expired symbols by email. You need to define the environment variable
+DPDK_GETMAINTAINER_PATH for this tool to work.
+
+Use terminal output to review the emails before sending.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+devtools/notify-symbol-maintainers.py --format-output terminal
+
+Then use email output to send the emails to the maintainers.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+devtools/notify-symbol-maintainers.py --format-output email \\
+--smtp-server <server> --sender <someone@somewhere.com> --password <password>
+'''
+
+EMAIL_TEMPLATE = '''Hi there,
+
+Please note the symbols listed below have expired. In line with the DPDK ABI
+policy, they should be scheduled for removal in the next DPDK release.
+
+For more information, please see the DPDK ABI Policy, section 3.5.3.
+https://doc.dpdk.org/guides/contributing/abi_policy.html
+
+Thanks,
+
+The DPDK Symbol Bot
+
+'''
+
+default_maintainers = ['Ray Kinsella <mdr@ashroe.eu>', \
+                       'Thomas Monjalon <thomas@monjalon.net>']
+get_maintainer = ['devtools/get-maintainer.sh', \
+                  '--email', '-f']
+
+def get_maintainers(libpath):
+    '''Get the maintainers for given library'''
+    try:
+        cmd = get_maintainer + [libpath]
+        result = subprocess.run(cmd, \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        result = None
+
+    if result is not None:
+        email = result.stdout.decode('utf-8')
+        if email == '':
+            email = default_maintainers
+        else:
+            email = list(filter(None,email.split('\n')))
+    else:
+        email = default_maintainers
+
+    return email
+
+def get_message(library, symbols):
+    '''Build email message from symbols, config and maintainers'''
+    message = {}
+    maintainers = get_maintainers(library)
+
+    message['To'] = maintainers
+    if maintainers != default_maintainers:
+        message['CC'] = default_maintainers
+
+    message['Subject'] = 'Expired symbols in {}\n'.format(library)
+
+    body = EMAIL_TEMPLATE
+    for sym in symbols:
+        body += ('{}\n'.format(sym))
+
+    message['Body'] = body
+
+    return message
+
+class OutputEmail():
+    '''Format the output for email'''
+    def __init__(self, config):
+        self.config = config
+
+        self.terminal = OutputTerminal(config)
+        context = ssl.create_default_context()
+
+        # Try to log in to server and send email
+        try:
+            self.server = smtplib.SMTP(config['smtp_server'], 587)
+            self.server.starttls(context=context) # Secure the connection
+            self.server.login(config['sender'], config['password'])
+        except Exception as exception:
+            print(exception)
+            raise exception
+
+    def message(self,message):
+        '''send email'''
+        self.terminal.message(message)
+
+        msg = EmailMessage()
+        msg.set_content(message.pop('Body'))
+
+        for key in message.keys():
+            msg[key] = message[key]
+
+        msg['From'] = self.config['sender']
+        msg['Reply-To'] = 'no-reply@dpdk.org'
+
+        self.server.send_message(msg)
+
+        time.sleep(1)
+
+    def __del__(self):
+        self.server.quit()
+
+class OutputTerminal(): # pylint: disable=too-few-public-methods
+    '''Format the output for the terminal'''
+    def __init__(self, config):
+        self.config = config
+
+    def message(self,message):
+        '''Print email to terminal'''
+        terminal = 'To:' + ', '.join(message['To']) + '\n'
+        if 'sender' in self.config.keys():
+            terminal += 'From:' + self.config['sender'] + '\n'
+
+        terminal += 'Reply-To:' + 'no-reply@dpdk.org' + '\n'
+        if 'CC' in message.keys():
+            terminal += 'CC:' + ', '.join(message['CC']) + '\n'
+
+        terminal += 'Subject:' + message['Subject'] + '\n'
+        terminal += 'Body:' + message['Body'] + '\n'
+
+        print(terminal)
+        print('-' * 80)
+
+def parse_config(args):
+    '''put the command line args in the right places'''
+    config = {}
+    error_msg = None
+
+    outputs = {
+        None : OutputTerminal,
+        'terminal' : OutputTerminal,
+        'email' : OutputEmail
+    }
+
+    if args.format_output == 'email':
+        if args.smtp_server is None:
+            error_msg = 'SMTP server'
+        else:
+            config['smtp_server'] = args.smtp_server
+
+        if args.sender is None:
+            error_msg = 'sender'
+        else:
+            config['sender'] = args.sender
+
+        if args.password is None:
+            error_msg = 'password'
+        else:
+            config['password'] = args.password
+
+    if error_msg is not None:
+        print('Please specify a {} for email output'.format(error_msg))
+        return None
+
+    config['output'] = outputs[args.format_output]
+    return config
+
+def main():
+    '''Main entry point'''
+    parser = argparse.ArgumentParser(description=DESCRIPTION, \
+                                     formatter_class=RawTextHelpFormatter)
+    parser.add_argument('--format-output', choices=['terminal','email'], \
+                        default='terminal')
+    parser.add_argument('--smtp-server')
+    parser.add_argument('--password')
+    parser.add_argument('--sender')
+
+    args = parser.parse_args()
+    config = parse_config(args)
+    if config is None:
+        return
+
+    symbols = []
+    lastlib = library = ''
+
+    output = config['output'](config)
+
+    for line in sys.stdin:
+        line = line.rstrip('\n')
+        library, symbol = [line[:line.find(',')], \
+                           line[line.find(',') + 1: len(line)]]
+        if library == 'mapfile':
+            continue
+
+        if library != lastlib:
+            message = get_message(lastlib, symbols)
+            output.message(message)
+            symbols = []
+
+        lastlib = library
+        symbols = symbols + [symbol]
+
+    #print the last library
+    message = get_message(lastlib, symbols)
+    output.message(message)
+
+if __name__ == '__main__':
+    main()
-- 
2.26.2


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v3 5/5] devtools: test different build types
  @ 2021-08-08 12:51 23%   ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-08-08 12:51 UTC (permalink / raw)
  To: dev; +Cc: bruce.richardson, david.marchand, Andrew Rybchenko

All builds were previously of type debugoptimized.
This build type is now kept only for builds having an ABI check.
Others will use the default build type (release),
unless specified differently, as in the x86 generic build,
which will test the non-optimized debug build type.
Some static builds will test the minsize build type.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

---

This patch cannot be merged now because it makes clang 11.1.0 crashing.
---
 devtools/test-meson-builds.sh | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 9ec8e2bc7e..7bd305a669 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -92,13 +92,16 @@ load_env () # <target compiler>
 	command -v $targetcc >/dev/null 2>&1 || return 1
 }
 
-config () # <dir> <builddir> <meson options>
+config () # <dir> <builddir> <ABI check> <meson options>
 {
 	dir=$1
 	shift
 	builddir=$1
 	shift
+	abicheck=$1
+	shift
 	if [ -f "$builddir/build.ninja" ] ; then
+		[ $abicheck = ABI ] || return 0
 		# for existing environments, switch to debugoptimized if unset
 		# so that ABI checks can run
 		if ! $MESON configure $builddir |
@@ -114,7 +117,9 @@ config () # <dir> <builddir> <meson options>
 	else
 		options="$options -Dexamples=l3fwd" # save disk space
 	fi
-	options="$options --buildtype=debugoptimized"
+	if [ $abicheck = ABI ] ; then
+		options="$options --buildtype=debugoptimized"
+	fi
 	for option in $DPDK_MESON_OPTIONS ; do
 		options="$options -D$option"
 	done
@@ -165,7 +170,7 @@ build () # <directory> <target cc | cross file> <ABI check> [meson options]
 		cross=
 	fi
 	load_env $targetcc || return 0
-	config $srcdir $builds_dir/$targetdir $cross --werror $*
+	config $srcdir $builds_dir/$targetdir $abicheck $cross --werror $*
 	compile $builds_dir/$targetdir
 	if [ -n "$DPDK_ABI_REF_VERSION" -a "$abicheck" = ABI ] ; then
 		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
@@ -179,7 +184,7 @@ build () # <directory> <target cc | cross file> <ABI check> [meson options]
 			fi
 
 			rm -rf $abirefdir/build
-			config $abirefdir/src $abirefdir/build $cross \
+			config $abirefdir/src $abirefdir/build $abicheck $cross \
 				-Dexamples= $*
 			compile $abirefdir/build
 			install_target $abirefdir/build $abirefdir/$targetdir
@@ -211,11 +216,13 @@ for c in gcc clang ; do
 	for s in static shared ; do
 		if [ $s = shared ] ; then
 			abicheck=ABI
+			buildtype=
 		else
 			abicheck=skipABI # save time and disk space
+			buildtype='--buildtype=minsize'
 		fi
 		export CC="$CCACHE $c"
-		build build-$c-$s $c $abicheck --default-library=$s
+		build build-$c-$s $c $abicheck $buildtype --default-library=$s
 		unset CC
 	done
 done
@@ -227,7 +234,7 @@ generic_isa='nehalem'
 if ! check_cc_flags "-march=$generic_isa" ; then
 	generic_isa='corei7'
 fi
-build build-x86-generic cc skipABI -Dcheck_includes=true \
+build build-x86-generic cc skipABI --buildtype=debug -Dcheck_includes=true \
 	-Dlibdir=lib -Dcpu_instruction_set=$generic_isa $use_shared
 
 # 32-bit with default compiler
-- 
2.31.1


^ permalink raw reply	[relevance 23%]

* [dpdk-dev] [dpdk-announce] DPDK 21.08 released
@ 2021-08-08 17:46  3% Thomas Monjalon
  2021-08-08 17:50  0% ` St Leger, Jim
  0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2021-08-08 17:46 UTC (permalink / raw)
  To: announce

A new release is available:
	https://fast.dpdk.org/rel/dpdk-21.08.tar.xz

Summer release numbers:
	922 commits from 159 authors
	1069 files changed, 150746 insertions(+), 85146 deletions(-)

It is not planned to start a maintenance branch for 21.08.
This version is ABI-compatible with 20.11, 21.02 and 21.05.

Below are some new features:
	- Linux auxiliary bus
	- Aarch32 cross-compilation
	- Arm CPPC power management
	- Rx multi-queue monitoring for power management
	- XZ compressed firmware read
	- Marvell CNXK drivers for ethernet, crypto and baseband PHY
	- Wangxun ngbe ethernet driver
	- NVIDIA mlx5 crypto driver supporting AES-XTS
	- ISAL compress support on Arm

More details in the release notes:
	https://doc.dpdk.org/guides/rel_notes/release_21_08.html


There are 30 new contributors (including authors, reviewers and testers).
Welcome to Aakash Sasidharan, Aman Deep Singh, Cheng Liu, Chenglian Sun,
Conor Fogarty, Douglas Flint, Gaoxiang Liu, Ghalem Boudour,
Gordon Noonan, Heng Wang, Henry Nadeau, James Grant, Jeffrey Huang,
Jochen Behrens, John Levon, Lior Margalit, Martin Havlik,
Naga Harish K S V, Nathan Skrzypczak, Owen Hilyard, Paulis Gributs,
Raja Zidane, Rebecca Troy, Rob Scheepens, Rongwei Liu, Shai Brandes,
Srujana Challa, Tudor Cornea, Vanshika Shukla, and Yixue Wang.

Below is the number of commits per employer (with authors count):
	222     Marvell (22)
	183     NVIDIA (26)
	168     Intel (44)
	100     Broadcom (12)
	 45     OKTET Labs (5)
	 36     Huawei (7)
	 35     Arm (7)
	 29     Red Hat (5)
	 20     Trustnet (1)
	 17     6WIND (3)
	 13     Microsoft (2)
	  8     NXP (4)
	  7     Semihalf (1)
	  5     UNH (2)
	  5     PANTHEON.tech (1)
	  4     Chelsio (1)
	  3     IBM (1)

Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:
	45     Akhil Goyal <gakhil@marvell.com>
	34     Jerin Jacob <jerinj@marvell.com>
	21     Ruifeng Wang <ruifeng.wang@arm.com>
	20     Ajit Khaparde <ajit.khaparde@broadcom.com>
	19     Matan Azrad <matan@nvidia.com>
	19     Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
	17     Konstantin Ananyev <konstantin.ananyev@intel.com>
	15     Chenbo Xia <chenbo.xia@intel.com>
	14     Maxime Coquelin <maxime.coquelin@redhat.com>
	14     David Marchand <david.marchand@redhat.com>
	13     Viacheslav Ovsiienko <viacheslavo@nvidia.com>
	11     Thomas Monjalon <thomas@monjalon.net>
	 9     Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
	 8     Stephen Hemminger <stephen@networkplumber.org>
	 8     Bruce Richardson <bruce.richardson@intel.com>


DPDK 21.11 will be a big and busy release.
The new features for 21.11 can be submitted during one month:
	http://core.dpdk.org/roadmap#dates
Please share your features roadmap.

Thanks everyone



^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [dpdk-announce] DPDK 21.08 released
  2021-08-08 17:46  3% [dpdk-dev] [dpdk-announce] DPDK 21.08 released Thomas Monjalon
@ 2021-08-08 17:50  0% ` St Leger, Jim
  0 siblings, 0 replies; 200+ results
From: St Leger, Jim @ 2021-08-08 17:50 UTC (permalink / raw)
  To: dev

Nice work by all! (This release should be called the Olympic Release, out just as the Tokyo 2020 games are concluding.)

Now go off and enjoy some well-earned summer holidays. 

Stay safe,
Jim


> On Aug 8, 2021, at 10:47, Thomas Monjalon <thomas@monjalon.net> wrote:
> 
> A new release is available:
>    https://fast.dpdk.org/rel/dpdk-21.08.tar.xz
> 
> Summer release numbers:
>    922 commits from 159 authors
>    1069 files changed, 150746 insertions(+), 85146 deletions(-)
> 
> It is not planned to start a maintenance branch for 21.08.
> This version is ABI-compatible with 20.11, 21.02 and 21.05.
> 
> Below are some new features:
>    - Linux auxiliary bus
>    - Aarch32 cross-compilation
>    - Arm CPPC power management
>    - Rx multi-queue monitoring for power management
>    - XZ compressed firmware read
>    - Marvell CNXK drivers for ethernet, crypto and baseband PHY
>    - Wangxun ngbe ethernet driver
>    - NVIDIA mlx5 crypto driver supporting AES-XTS
>    - ISAL compress support on Arm
> 
> More details in the release notes:
>    https://doc.dpdk.org/guides/rel_notes/release_21_08.html
> 
> 
> There are 30 new contributors (including authors, reviewers and testers).
> Welcome to Aakash Sasidharan, Aman Deep Singh, Cheng Liu, Chenglian Sun,
> Conor Fogarty, Douglas Flint, Gaoxiang Liu, Ghalem Boudour,
> Gordon Noonan, Heng Wang, Henry Nadeau, James Grant, Jeffrey Huang,
> Jochen Behrens, John Levon, Lior Margalit, Martin Havlik,
> Naga Harish K S V, Nathan Skrzypczak, Owen Hilyard, Paulis Gributs,
> Raja Zidane, Rebecca Troy, Rob Scheepens, Rongwei Liu, Shai Brandes,
> Srujana Challa, Tudor Cornea, Vanshika Shukla, and Yixue Wang.
> 
> Below is the number of commits per employer (with author counts):
>    222     Marvell (22)
>    183     NVIDIA (26)
>    168     Intel (44)
>    100     Broadcom (12)
>     45     OKTET Labs (5)
>     36     Huawei (7)
>     35     Arm (7)
>     29     Red Hat (5)
>     20     Trustnet (1)
>     17     6WIND (3)
>     13     Microsoft (2)
>      8     NXP (4)
>      7     Semihalf (1)
>      5     UNH (2)
>      5     PANTHEON.tech (1)
>      4     Chelsio (1)
>      3     IBM (1)
> 
> Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:
>    45     Akhil Goyal <gakhil@marvell.com>
>    34     Jerin Jacob <jerinj@marvell.com>
>    21     Ruifeng Wang <ruifeng.wang@arm.com>
>    20     Ajit Khaparde <ajit.khaparde@broadcom.com>
>    19     Matan Azrad <matan@nvidia.com>
>    19     Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>    17     Konstantin Ananyev <konstantin.ananyev@intel.com>
>    15     Chenbo Xia <chenbo.xia@intel.com>
>    14     Maxime Coquelin <maxime.coquelin@redhat.com>
>    14     David Marchand <david.marchand@redhat.com>
>    13     Viacheslav Ovsiienko <viacheslavo@nvidia.com>
>    11     Thomas Monjalon <thomas@monjalon.net>
>     9     Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
>     8     Stephen Hemminger <stephen@networkplumber.org>
>     8     Bruce Richardson <bruce.richardson@intel.com>
> 
> 
> DPDK 21.11 will be a big and busy release.
> New features for 21.11 can be submitted during the coming month:
>    http://core.dpdk.org/roadmap#dates
> Please share your features roadmap.
> 
> Thanks everyone
> 
> 

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] version: 21.11-rc0
@ 2021-08-08 19:26 11% Thomas Monjalon
  2021-08-12 14:36  0% ` Ferruh Yigit
  2021-08-17  6:34  4% ` [dpdk-dev] " David Marchand
  0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2021-08-08 19:26 UTC (permalink / raw)
  To: dev; +Cc: david.marchand, mdr

Start a new release cycle with empty release notes.

The ABI version becomes 22.0.
The map files are updated to the new ABI major number (22).
The ABI exceptions are dropped
and CI ABI checks are disabled
because compatibility is not preserved.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
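Most of the diff below is a mechanical rename of the DPDK_21 version node
to DPDK_22 in every library and driver version.map, plus the ABI_VERSION
bump. As a rough illustration of the mechanics (an assumption about how such
a bulk edit could be scripted, not necessarily how this patch was generated,
and not covering the CI or release-notes changes), a minimal shell sketch:

  # illustration only: rename the ABI version node in all symbol map files
  for f in $(git ls-files '*version.map'); do
          sed -i 's/^DPDK_21 {/DPDK_22 {/' "$f"
  done
  # record the new major ABI number used by the build system
  echo 22.0 > ABI_VERSION
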
 .github/workflows/build.yml                |   4 +-
 .travis.yml                                |  21 +---
 ABI_VERSION                                |   2 +-
 VERSION                                    |   2 +-
 devtools/libabigail.abignore               |  41 -------
 doc/guides/rel_notes/index.rst             |   1 +
 doc/guides/rel_notes/release_21_11.rst     | 136 +++++++++++++++++++++
 drivers/baseband/acc100/version.map        |   2 +-
 drivers/baseband/fpga_5gnr_fec/version.map |   2 +-
 drivers/baseband/fpga_lte_fec/version.map  |   2 +-
 drivers/baseband/null/version.map          |   2 +-
 drivers/baseband/turbo_sw/version.map      |   2 +-
 drivers/bus/ifpga/version.map              |   2 +-
 drivers/bus/pci/version.map                |   2 +-
 drivers/bus/vdev/version.map               |   2 +-
 drivers/bus/vmbus/version.map              |   2 +-
 drivers/common/qat/version.map             |   2 +-
 drivers/compress/isal/version.map          |   2 +-
 drivers/compress/mlx5/version.map          |   2 +-
 drivers/compress/octeontx/version.map      |   2 +-
 drivers/compress/zlib/version.map          |   2 +-
 drivers/crypto/aesni_gcm/version.map       |   2 +-
 drivers/crypto/aesni_mb/version.map        |   2 +-
 drivers/crypto/armv8/version.map           |   2 +-
 drivers/crypto/bcmfs/version.map           |   2 +-
 drivers/crypto/caam_jr/version.map         |   2 +-
 drivers/crypto/ccp/version.map             |   2 +-
 drivers/crypto/kasumi/version.map          |   2 +-
 drivers/crypto/mlx5/version.map            |   2 +-
 drivers/crypto/mvsam/version.map           |   2 +-
 drivers/crypto/nitrox/version.map          |   2 +-
 drivers/crypto/null/version.map            |   2 +-
 drivers/crypto/octeontx/version.map        |   2 +-
 drivers/crypto/octeontx2/version.map       |   2 +-
 drivers/crypto/openssl/version.map         |   2 +-
 drivers/crypto/scheduler/version.map       |   2 +-
 drivers/crypto/snow3g/version.map          |   2 +-
 drivers/crypto/virtio/version.map          |   2 +-
 drivers/crypto/zuc/version.map             |   2 +-
 drivers/event/dlb2/version.map             |   2 +-
 drivers/event/dpaa/version.map             |   2 +-
 drivers/event/dpaa2/version.map            |   2 +-
 drivers/event/dsw/version.map              |   2 +-
 drivers/event/octeontx/version.map         |   2 +-
 drivers/event/octeontx2/version.map        |   2 +-
 drivers/event/opdl/version.map             |   2 +-
 drivers/event/skeleton/version.map         |   2 +-
 drivers/event/sw/version.map               |   2 +-
 drivers/mempool/bucket/version.map         |   2 +-
 drivers/mempool/dpaa2/version.map          |   2 +-
 drivers/mempool/octeontx/version.map       |   2 +-
 drivers/mempool/ring/version.map           |   2 +-
 drivers/mempool/stack/version.map          |   2 +-
 drivers/net/af_packet/version.map          |   2 +-
 drivers/net/af_xdp/version.map             |   2 +-
 drivers/net/ark/version.map                |   2 +-
 drivers/net/atlantic/version.map           |   2 +-
 drivers/net/avp/version.map                |   2 +-
 drivers/net/axgbe/version.map              |   2 +-
 drivers/net/bnx2x/version.map              |   2 +-
 drivers/net/bnxt/version.map               |   2 +-
 drivers/net/bonding/version.map            |   2 +-
 drivers/net/cnxk/version.map               |   2 +-
 drivers/net/cxgbe/version.map              |   2 +-
 drivers/net/dpaa/version.map               |   2 +-
 drivers/net/e1000/version.map              |   2 +-
 drivers/net/ena/version.map                |   2 +-
 drivers/net/enetc/version.map              |   2 +-
 drivers/net/enic/version.map               |   2 +-
 drivers/net/failsafe/version.map           |   2 +-
 drivers/net/fm10k/version.map              |   2 +-
 drivers/net/hinic/version.map              |   2 +-
 drivers/net/hns3/version.map               |   2 +-
 drivers/net/i40e/version.map               |   2 +-
 drivers/net/iavf/version.map               |   2 +-
 drivers/net/ice/version.map                |   2 +-
 drivers/net/igc/version.map                |   2 +-
 drivers/net/ionic/version.map              |   2 +-
 drivers/net/ipn3ke/version.map             |   2 +-
 drivers/net/ixgbe/version.map              |   2 +-
 drivers/net/kni/version.map                |   2 +-
 drivers/net/liquidio/version.map           |   2 +-
 drivers/net/memif/version.map              |   2 +-
 drivers/net/mlx4/version.map               |   2 +-
 drivers/net/mlx5/version.map               |   2 +-
 drivers/net/mvneta/version.map             |   2 +-
 drivers/net/mvpp2/version.map              |   2 +-
 drivers/net/netvsc/version.map             |   2 +-
 drivers/net/nfb/version.map                |   2 +-
 drivers/net/nfp/version.map                |   2 +-
 drivers/net/ngbe/version.map               |   2 +-
 drivers/net/null/version.map               |   2 +-
 drivers/net/octeontx/version.map           |   2 +-
 drivers/net/octeontx2/version.map          |   2 +-
 drivers/net/octeontx_ep/version.map        |   4 +-
 drivers/net/pcap/version.map               |   2 +-
 drivers/net/pfe/version.map                |   2 +-
 drivers/net/qede/version.map               |   2 +-
 drivers/net/ring/version.map               |   2 +-
 drivers/net/sfc/version.map                |   2 +-
 drivers/net/softnic/version.map            |   2 +-
 drivers/net/szedata2/version.map           |   2 +-
 drivers/net/tap/version.map                |   2 +-
 drivers/net/thunderx/version.map           |   2 +-
 drivers/net/txgbe/version.map              |   2 +-
 drivers/net/vdev_netvsc/version.map        |   2 +-
 drivers/net/vhost/version.map              |   2 +-
 drivers/net/virtio/version.map             |   2 +-
 drivers/net/vmxnet3/version.map            |   2 +-
 drivers/raw/cnxk_bphy/version.map          |   2 +-
 drivers/raw/dpaa2_cmdif/version.map        |   2 +-
 drivers/raw/dpaa2_qdma/version.map         |   2 +-
 drivers/raw/ifpga/version.map              |   2 +-
 drivers/raw/ioat/version.map               |   2 +-
 drivers/raw/ntb/version.map                |   2 +-
 drivers/raw/octeontx2_dma/version.map      |   2 +-
 drivers/raw/octeontx2_ep/version.map       |   2 +-
 drivers/raw/skeleton/version.map           |   2 +-
 drivers/regex/mlx5/version.map             |   2 +-
 drivers/regex/octeontx2/version.map        |   2 +-
 drivers/vdpa/ifc/version.map               |   2 +-
 drivers/vdpa/mlx5/version.map              |   2 +-
 lib/acl/version.map                        |   2 +-
 lib/bitratestats/version.map               |   2 +-
 lib/bpf/version.map                        |   2 +-
 lib/cfgfile/version.map                    |   2 +-
 lib/cmdline/version.map                    |   2 +-
 lib/cryptodev/version.map                  |   2 +-
 lib/distributor/version.map                |   2 +-
 lib/eal/version.map                        |   2 +-
 lib/efd/version.map                        |   2 +-
 lib/ethdev/version.map                     |   2 +-
 lib/eventdev/version.map                   |   2 +-
 lib/gro/version.map                        |   2 +-
 lib/gso/version.map                        |   2 +-
 lib/hash/version.map                       |   2 +-
 lib/ip_frag/version.map                    |   2 +-
 lib/ipsec/version.map                      |   2 +-
 lib/jobstats/version.map                   |   2 +-
 lib/kni/version.map                        |   2 +-
 lib/kvargs/version.map                     |   2 +-
 lib/latencystats/version.map               |   2 +-
 lib/lpm/version.map                        |   2 +-
 lib/mbuf/version.map                       |   2 +-
 lib/member/version.map                     |   2 +-
 lib/mempool/version.map                    |   2 +-
 lib/meter/version.map                      |   2 +-
 lib/metrics/version.map                    |   2 +-
 lib/net/version.map                        |   2 +-
 lib/pci/version.map                        |   2 +-
 lib/pdump/version.map                      |   2 +-
 lib/pipeline/version.map                   |   2 +-
 lib/port/version.map                       |   2 +-
 lib/power/version.map                      |   2 +-
 lib/rawdev/version.map                     |   2 +-
 lib/rcu/version.map                        |   2 +-
 lib/reorder/version.map                    |   2 +-
 lib/ring/version.map                       |   2 +-
 lib/sched/version.map                      |   2 +-
 lib/security/version.map                   |   2 +-
 lib/stack/version.map                      |   2 +-
 lib/table/version.map                      |   2 +-
 lib/timer/version.map                      |   2 +-
 lib/vhost/version.map                      |  40 +++---
 164 files changed, 319 insertions(+), 242 deletions(-)
 create mode 100644 doc/guides/rel_notes/release_21_11.rst

diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 7dac20ddeb..151641e6fa 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -21,7 +21,7 @@ jobs:
       CC: ccache ${{ matrix.config.compiler }}
       DEF_LIB: ${{ matrix.config.library }}
       LIBABIGAIL_VERSION: libabigail-1.8
-      REF_GIT_TAG: v21.05
+      REF_GIT_TAG: none
       RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
 
     strategy:
@@ -34,7 +34,7 @@ jobs:
           - os: ubuntu-18.04
             compiler: gcc
             library: shared
-            checks: abi+doc+tests
+            checks: doc+tests
           - os: ubuntu-18.04
             compiler: clang
             library: static
diff --git a/.travis.yml b/.travis.yml
index 23067d9e3c..4bb5bf629e 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -42,7 +42,7 @@ script: ./.ci/${TRAVIS_OS_NAME}-build.sh
 env:
   global:
     - LIBABIGAIL_VERSION=libabigail-1.8
-    - REF_GIT_TAG=v21.05
+    - REF_GIT_TAG=none
 
 jobs:
   include:
@@ -61,14 +61,6 @@ jobs:
         packages:
           - *required_packages
           - *doc_packages
-  - env: DEF_LIB="shared" ABI_CHECKS=true
-    arch: amd64
-    compiler: gcc
-    addons:
-      apt:
-        packages:
-          - *required_packages
-          - *libabigail_build_packages
   # x86_64 clang jobs
   - env: DEF_LIB="static"
     arch: amd64
@@ -145,17 +137,6 @@ jobs:
         packages:
           - *required_packages
           - *doc_packages
-  - env: DEF_LIB="shared" ABI_CHECKS=true
-    dist: focal
-    arch: arm64-graviton2
-    virt: vm
-    group: edge
-    compiler: gcc
-    addons:
-      apt:
-        packages:
-          - *required_packages
-          - *libabigail_build_packages
   # aarch64 clang jobs
   - env: DEF_LIB="static"
     dist: focal
diff --git a/ABI_VERSION b/ABI_VERSION
index 8e5954eb6f..b090fe57f6 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-21.3
+22.0
diff --git a/VERSION b/VERSION
index 6512890184..0931886fb0 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-21.08.0
+21.11.0-rc0
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 93158405e0..4b676f317d 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -11,44 +11,3 @@
 ; Ignore generated PMD information strings
 [suppress_variable]
         name_regexp = _pmd_info$
-
-; Explicit ignore for driver-only ABI
-[suppress_function]
-        name_regexp = rte_vdev_(|un)register
-
-; Ignore fields inserted in cacheline boundary of rte_cryptodev
-[suppress_type]
-        name = rte_cryptodev
-        has_data_member_inserted_between = {offset_after(attached), end}
-
-; Ignore fields inserted in union boundary of rte_cryptodev_symmetric_capability
-[suppress_type]
-        name = rte_cryptodev_symmetric_capability
-        has_data_member_inserted_between = {offset_after(cipher.iv_size), end}
-
-; Ignore fields inserted in middle padding of rte_crypto_cipher_xform
-[suppress_type]
-        name = rte_crypto_cipher_xform
-        has_data_member_inserted_between = {offset_after(key), offset_of(iv)}
-
-; Ignore fields inserted in place of reserved fields of rte_eventdev
-[suppress_type]
-	name = rte_eventdev
-	has_data_member_inserted_between = {offset_after(attached), end}
-
-; Ignore fields inserted in alignment hole of rte_eth_rxq_info
-[suppress_type]
-	name = rte_eth_rxq_info
-	has_data_member_inserted_at = offset_after(scattered_rx)
-
-; Ignore fields inserted in cacheline boundary of rte_eth_txq_info
-[suppress_type]
-	name = rte_eth_txq_info
-	has_data_member_inserted_between = {offset_after(nb_desc), end}
-
-; Ignore all changes to rte_eth_dev_data
-; Note: we only cared about dev_configured bit addition, but libabigail
-; seems to wrongly compute bitfields offset.
-; https://sourceware.org/bugzilla/show_bug.cgi?id=28060
-[suppress_type]
-	name = rte_eth_dev_data
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 9648ba60e1..78861ee57b 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
     :maxdepth: 1
     :numbered:
 
+    release_21_11
     release_21_08
     release_21_05
     release_21_02
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
new file mode 100644
index 0000000000..d707a554ef
--- /dev/null
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -0,0 +1,136 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright 2021 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 21.11
+==================
+
+.. **Read this first.**
+
+   The text in the sections below explains how to update the release notes.
+
+   Use proper spelling, capitalization and punctuation in all sections.
+
+   Variable and config names should be quoted as fixed width text:
+   ``LIKE_THIS``.
+
+   Build the docs and view the output file to ensure the changes are correct::
+
+      make doc-guides-html
+      xdg-open build/doc/html/guides/rel_notes/release_21_11.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+   Sample format:
+
+   * **Add a title in the past tense with a full stop.**
+
+     Add a short 1-2 sentence description in the past tense.
+     The description should be enough to allow someone scanning
+     the release notes to understand the new feature.
+
+     If the feature adds a lot of sub-features you can use a bullet list
+     like this:
+
+     * Added feature foo to do something.
+     * Enhanced feature bar to do something else.
+
+     Refer to the previous release notes for examples.
+
+     Suggested order in release notes items:
+     * Core libs (EAL, mempool, ring, mbuf, buses)
+     * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
+       - ethdev (lib, PMDs)
+       - cryptodev (lib, PMDs)
+       - eventdev (lib, PMDs)
+       - etc
+     * Other libs
+     * Apps, Examples, Tools (if significant)
+
+     This section is a comment. Do not overwrite or remove it.
+     Also, make sure to start the actual text at the margin.
+     =======================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+   * Add a short 1-2 sentence description of the removed item
+     in the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the API change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+   * sample: Add a short 1-2 sentence description of the ABI change
+     which was announced in the previous releases and made in this release.
+     Start with a scope label like "ethdev:".
+     Use fixed width quotes for ``function_names`` or ``struct_names``.
+     Use the past tense.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+   * **Add title in present tense with full stop.**
+
+     Add a short 1-2 sentence description of the known issue
+     in the present tense. Add information on any known workarounds.
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+   with this release.
+
+   The format is:
+
+   * <vendor> platform with <vendor> <type of devices> combinations
+
+     * List of CPU
+     * List of OS
+     * List of devices
+     * Other relevant details...
+
+   This section is a comment. Do not overwrite or remove it.
+   Also, make sure to start the actual text at the margin.
+   =======================================================
diff --git a/drivers/baseband/acc100/version.map b/drivers/baseband/acc100/version.map
index 47a23b8dac..40604c73d2 100644
--- a/drivers/baseband/acc100/version.map
+++ b/drivers/baseband/acc100/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/baseband/fpga_5gnr_fec/version.map b/drivers/baseband/fpga_5gnr_fec/version.map
index db43cd8403..de4e5025bf 100644
--- a/drivers/baseband/fpga_5gnr_fec/version.map
+++ b/drivers/baseband/fpga_5gnr_fec/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/baseband/fpga_lte_fec/version.map b/drivers/baseband/fpga_lte_fec/version.map
index b95b7838e8..e3bfa6edb0 100644
--- a/drivers/baseband/fpga_lte_fec/version.map
+++ b/drivers/baseband/fpga_lte_fec/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/baseband/null/version.map b/drivers/baseband/null/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/baseband/null/version.map
+++ b/drivers/baseband/null/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/baseband/turbo_sw/version.map b/drivers/baseband/turbo_sw/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/baseband/turbo_sw/version.map
+++ b/drivers/baseband/turbo_sw/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/bus/ifpga/version.map b/drivers/bus/ifpga/version.map
index 6e8f85da3c..8ac3a4d258 100644
--- a/drivers/bus/ifpga/version.map
+++ b/drivers/bus/ifpga/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_ifpga_driver_register;
diff --git a/drivers/bus/pci/version.map b/drivers/bus/pci/version.map
index 00fac8864c..aa56439c2b 100644
--- a/drivers/bus/pci/version.map
+++ b/drivers/bus/pci/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_pci_dump;
diff --git a/drivers/bus/vdev/version.map b/drivers/bus/vdev/version.map
index 61b6cefcee..0d60b7e2bc 100644
--- a/drivers/bus/vdev/version.map
+++ b/drivers/bus/vdev/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_vdev_add_custom_scan;
diff --git a/drivers/bus/vmbus/version.map b/drivers/bus/vmbus/version.map
index fa8e91c282..3cadec7fae 100644
--- a/drivers/bus/vmbus/version.map
+++ b/drivers/bus/vmbus/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_vmbus_chan_close;
diff --git a/drivers/common/qat/version.map b/drivers/common/qat/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/common/qat/version.map
+++ b/drivers/common/qat/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/compress/isal/version.map b/drivers/compress/isal/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/compress/isal/version.map
+++ b/drivers/compress/isal/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/compress/mlx5/version.map b/drivers/compress/mlx5/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/compress/mlx5/version.map
+++ b/drivers/compress/mlx5/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/compress/octeontx/version.map b/drivers/compress/octeontx/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/compress/octeontx/version.map
+++ b/drivers/compress/octeontx/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/compress/zlib/version.map b/drivers/compress/zlib/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/compress/zlib/version.map
+++ b/drivers/compress/zlib/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/aesni_gcm/version.map b/drivers/crypto/aesni_gcm/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/aesni_gcm/version.map
+++ b/drivers/crypto/aesni_gcm/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/aesni_mb/version.map b/drivers/crypto/aesni_mb/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/aesni_mb/version.map
+++ b/drivers/crypto/aesni_mb/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/armv8/version.map b/drivers/crypto/armv8/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/armv8/version.map
+++ b/drivers/crypto/armv8/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/bcmfs/version.map b/drivers/crypto/bcmfs/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/bcmfs/version.map
+++ b/drivers/crypto/bcmfs/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/caam_jr/version.map b/drivers/crypto/caam_jr/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/caam_jr/version.map
+++ b/drivers/crypto/caam_jr/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/ccp/version.map b/drivers/crypto/ccp/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/ccp/version.map
+++ b/drivers/crypto/ccp/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/kasumi/version.map b/drivers/crypto/kasumi/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/kasumi/version.map
+++ b/drivers/crypto/kasumi/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/mlx5/version.map b/drivers/crypto/mlx5/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/mlx5/version.map
+++ b/drivers/crypto/mlx5/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/mvsam/version.map b/drivers/crypto/mvsam/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/mvsam/version.map
+++ b/drivers/crypto/mvsam/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/nitrox/version.map b/drivers/crypto/nitrox/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/nitrox/version.map
+++ b/drivers/crypto/nitrox/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/null/version.map b/drivers/crypto/null/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/null/version.map
+++ b/drivers/crypto/null/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/octeontx/version.map b/drivers/crypto/octeontx/version.map
index 41f33a4ecf..997a95ea33 100644
--- a/drivers/crypto/octeontx/version.map
+++ b/drivers/crypto/octeontx/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/crypto/octeontx2/version.map b/drivers/crypto/octeontx2/version.map
index 02684781b3..d36663132a 100644
--- a/drivers/crypto/octeontx2/version.map
+++ b/drivers/crypto/octeontx2/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/crypto/openssl/version.map b/drivers/crypto/openssl/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/openssl/version.map
+++ b/drivers/crypto/openssl/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/scheduler/version.map b/drivers/crypto/scheduler/version.map
index ab7d505629..47e4487b75 100644
--- a/drivers/crypto/scheduler/version.map
+++ b/drivers/crypto/scheduler/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_cryptodev_scheduler_load_user_scheduler;
diff --git a/drivers/crypto/snow3g/version.map b/drivers/crypto/snow3g/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/snow3g/version.map
+++ b/drivers/crypto/snow3g/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/virtio/version.map b/drivers/crypto/virtio/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/virtio/version.map
+++ b/drivers/crypto/virtio/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/crypto/zuc/version.map b/drivers/crypto/zuc/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/crypto/zuc/version.map
+++ b/drivers/crypto/zuc/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/event/dlb2/version.map b/drivers/event/dlb2/version.map
index b1e4dff0ff..c727207d1a 100644
--- a/drivers/event/dlb2/version.map
+++ b/drivers/event/dlb2/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/event/dpaa/version.map b/drivers/event/dpaa/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/event/dpaa/version.map
+++ b/drivers/event/dpaa/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/event/dpaa2/version.map b/drivers/event/dpaa2/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/event/dpaa2/version.map
+++ b/drivers/event/dpaa2/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/event/dsw/version.map b/drivers/event/dsw/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/event/dsw/version.map
+++ b/drivers/event/dsw/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/event/octeontx/version.map b/drivers/event/octeontx/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/event/octeontx/version.map
+++ b/drivers/event/octeontx/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/event/octeontx2/version.map b/drivers/event/octeontx2/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/event/octeontx2/version.map
+++ b/drivers/event/octeontx2/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/event/opdl/version.map b/drivers/event/opdl/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/event/opdl/version.map
+++ b/drivers/event/opdl/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/event/skeleton/version.map b/drivers/event/skeleton/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/event/skeleton/version.map
+++ b/drivers/event/skeleton/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/event/sw/version.map b/drivers/event/sw/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/event/sw/version.map
+++ b/drivers/event/sw/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/mempool/bucket/version.map b/drivers/mempool/bucket/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/mempool/bucket/version.map
+++ b/drivers/mempool/bucket/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/mempool/dpaa2/version.map b/drivers/mempool/dpaa2/version.map
index 473b8c90e8..49c460ec54 100644
--- a/drivers/mempool/dpaa2/version.map
+++ b/drivers/mempool/dpaa2/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_dpaa2_mbuf_from_buf_addr;
diff --git a/drivers/mempool/octeontx/version.map b/drivers/mempool/octeontx/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/mempool/octeontx/version.map
+++ b/drivers/mempool/octeontx/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/mempool/ring/version.map b/drivers/mempool/ring/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/mempool/ring/version.map
+++ b/drivers/mempool/ring/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/mempool/stack/version.map b/drivers/mempool/stack/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/mempool/stack/version.map
+++ b/drivers/mempool/stack/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/af_packet/version.map b/drivers/net/af_packet/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/af_packet/version.map
+++ b/drivers/net/af_packet/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/af_xdp/version.map b/drivers/net/af_xdp/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/af_xdp/version.map
+++ b/drivers/net/af_xdp/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/ark/version.map b/drivers/net/ark/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/ark/version.map
+++ b/drivers/net/ark/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/atlantic/version.map b/drivers/net/atlantic/version.map
index 6e17832684..d36fc61a84 100644
--- a/drivers/net/atlantic/version.map
+++ b/drivers/net/atlantic/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/net/avp/version.map b/drivers/net/avp/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/avp/version.map
+++ b/drivers/net/avp/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/axgbe/version.map b/drivers/net/axgbe/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/axgbe/version.map
+++ b/drivers/net/axgbe/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/bnx2x/version.map b/drivers/net/bnx2x/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/bnx2x/version.map
+++ b/drivers/net/bnx2x/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/bnxt/version.map b/drivers/net/bnxt/version.map
index a050d86ab7..2ba5ec5f6e 100644
--- a/drivers/net/bnxt/version.map
+++ b/drivers/net/bnxt/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_pmd_bnxt_get_vf_rx_status;
diff --git a/drivers/net/bonding/version.map b/drivers/net/bonding/version.map
index df81ee74c1..d7142c4f94 100644
--- a/drivers/net/bonding/version.map
+++ b/drivers/net/bonding/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_eth_bond_8023ad_agg_selection_get;
diff --git a/drivers/net/cnxk/version.map b/drivers/net/cnxk/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/cnxk/version.map
+++ b/drivers/net/cnxk/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/cxgbe/version.map b/drivers/net/cxgbe/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/cxgbe/version.map
+++ b/drivers/net/cxgbe/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/dpaa/version.map b/drivers/net/dpaa/version.map
index 87ce8f5b6c..338ea2d8b2 100644
--- a/drivers/net/dpaa/version.map
+++ b/drivers/net/dpaa/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_pmd_dpaa_set_tx_loopback;
diff --git a/drivers/net/e1000/version.map b/drivers/net/e1000/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/e1000/version.map
+++ b/drivers/net/e1000/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/ena/version.map b/drivers/net/ena/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/ena/version.map
+++ b/drivers/net/ena/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/enetc/version.map b/drivers/net/enetc/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/enetc/version.map
+++ b/drivers/net/enetc/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/enic/version.map b/drivers/net/enic/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/enic/version.map
+++ b/drivers/net/enic/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/failsafe/version.map b/drivers/net/failsafe/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/failsafe/version.map
+++ b/drivers/net/failsafe/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/fm10k/version.map b/drivers/net/fm10k/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/fm10k/version.map
+++ b/drivers/net/fm10k/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/hinic/version.map b/drivers/net/hinic/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/hinic/version.map
+++ b/drivers/net/hinic/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/hns3/version.map b/drivers/net/hns3/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/hns3/version.map
+++ b/drivers/net/hns3/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/i40e/version.map b/drivers/net/i40e/version.map
index 413c58cb21..5dd68158d3 100644
--- a/drivers/net/i40e/version.map
+++ b/drivers/net/i40e/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_pmd_i40e_add_vf_mac_addr;
diff --git a/drivers/net/iavf/version.map b/drivers/net/iavf/version.map
index 2a411da2e9..f3efe756cf 100644
--- a/drivers/net/iavf/version.map
+++ b/drivers/net/iavf/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/net/ice/version.map b/drivers/net/ice/version.map
index 632a296a0c..cc837f1c00 100644
--- a/drivers/net/ice/version.map
+++ b/drivers/net/ice/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/net/igc/version.map b/drivers/net/igc/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/igc/version.map
+++ b/drivers/net/igc/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/ionic/version.map b/drivers/net/ionic/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/ionic/version.map
+++ b/drivers/net/ionic/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/ipn3ke/version.map b/drivers/net/ipn3ke/version.map
index d8cc1026e0..568ce32e88 100644
--- a/drivers/net/ipn3ke/version.map
+++ b/drivers/net/ipn3ke/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/net/ixgbe/version.map b/drivers/net/ixgbe/version.map
index 9402802b04..bca5cc5826 100644
--- a/drivers/net/ixgbe/version.map
+++ b/drivers/net/ixgbe/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_pmd_ixgbe_bypass_event_show;
diff --git a/drivers/net/kni/version.map b/drivers/net/kni/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/kni/version.map
+++ b/drivers/net/kni/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/liquidio/version.map b/drivers/net/liquidio/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/liquidio/version.map
+++ b/drivers/net/liquidio/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/memif/version.map b/drivers/net/memif/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/memif/version.map
+++ b/drivers/net/memif/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/mlx4/version.map b/drivers/net/mlx4/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/mlx4/version.map
+++ b/drivers/net/mlx4/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/mlx5/version.map b/drivers/net/mlx5/version.map
index 82a32b53da..0af7a12488 100644
--- a/drivers/net/mlx5/version.map
+++ b/drivers/net/mlx5/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/net/mvneta/version.map b/drivers/net/mvneta/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/mvneta/version.map
+++ b/drivers/net/mvneta/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/mvpp2/version.map b/drivers/net/mvpp2/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/mvpp2/version.map
+++ b/drivers/net/mvpp2/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/netvsc/version.map b/drivers/net/netvsc/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/netvsc/version.map
+++ b/drivers/net/netvsc/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/nfb/version.map b/drivers/net/nfb/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/nfb/version.map
+++ b/drivers/net/nfb/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/nfp/version.map b/drivers/net/nfp/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/nfp/version.map
+++ b/drivers/net/nfp/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/ngbe/version.map b/drivers/net/ngbe/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/ngbe/version.map
+++ b/drivers/net/ngbe/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/null/version.map b/drivers/net/null/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/null/version.map
+++ b/drivers/net/null/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/octeontx/version.map b/drivers/net/octeontx/version.map
index 6dda72890c..d12156778e 100644
--- a/drivers/net/octeontx/version.map
+++ b/drivers/net/octeontx/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_octeontx_pchan_map;
diff --git a/drivers/net/octeontx2/version.map b/drivers/net/octeontx2/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/octeontx2/version.map
+++ b/drivers/net/octeontx2/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/octeontx_ep/version.map b/drivers/net/octeontx_ep/version.map
index 6e4fb220ac..c2e0723b4c 100644
--- a/drivers/net/octeontx_ep/version.map
+++ b/drivers/net/octeontx_ep/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
-        local: *;
+DPDK_22 {
+	local: *;
 };
diff --git a/drivers/net/pcap/version.map b/drivers/net/pcap/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/pcap/version.map
+++ b/drivers/net/pcap/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/pfe/version.map b/drivers/net/pfe/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/pfe/version.map
+++ b/drivers/net/pfe/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/qede/version.map b/drivers/net/qede/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/qede/version.map
+++ b/drivers/net/qede/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/ring/version.map b/drivers/net/ring/version.map
index 29770fe3e4..e43f5ea908 100644
--- a/drivers/net/ring/version.map
+++ b/drivers/net/ring/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_eth_from_ring;
diff --git a/drivers/net/sfc/version.map b/drivers/net/sfc/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/sfc/version.map
+++ b/drivers/net/sfc/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/softnic/version.map b/drivers/net/softnic/version.map
index 530d2e6b72..6784318f77 100644
--- a/drivers/net/softnic/version.map
+++ b/drivers/net/softnic/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_pmd_softnic_run;
diff --git a/drivers/net/szedata2/version.map b/drivers/net/szedata2/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/szedata2/version.map
+++ b/drivers/net/szedata2/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/tap/version.map b/drivers/net/tap/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/tap/version.map
+++ b/drivers/net/tap/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/thunderx/version.map b/drivers/net/thunderx/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/thunderx/version.map
+++ b/drivers/net/thunderx/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/txgbe/version.map b/drivers/net/txgbe/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/txgbe/version.map
+++ b/drivers/net/txgbe/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/vdev_netvsc/version.map b/drivers/net/vdev_netvsc/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/vdev_netvsc/version.map
+++ b/drivers/net/vdev_netvsc/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/vhost/version.map b/drivers/net/vhost/version.map
index 634255829e..1aa8abef75 100644
--- a/drivers/net/vhost/version.map
+++ b/drivers/net/vhost/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_eth_vhost_get_queue_event;
diff --git a/drivers/net/virtio/version.map b/drivers/net/virtio/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/virtio/version.map
+++ b/drivers/net/virtio/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/net/vmxnet3/version.map b/drivers/net/vmxnet3/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/net/vmxnet3/version.map
+++ b/drivers/net/vmxnet3/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/raw/cnxk_bphy/version.map b/drivers/raw/cnxk_bphy/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/raw/cnxk_bphy/version.map
+++ b/drivers/raw/cnxk_bphy/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/raw/dpaa2_cmdif/version.map b/drivers/raw/dpaa2_cmdif/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/raw/dpaa2_cmdif/version.map
+++ b/drivers/raw/dpaa2_cmdif/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/raw/dpaa2_qdma/version.map b/drivers/raw/dpaa2_qdma/version.map
index 9130383ab8..441918d55e 100644
--- a/drivers/raw/dpaa2_qdma/version.map
+++ b/drivers/raw/dpaa2_qdma/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_qdma_vq_stats;
diff --git a/drivers/raw/ifpga/version.map b/drivers/raw/ifpga/version.map
index 995c419a9b..a1a6be25a9 100644
--- a/drivers/raw/ifpga/version.map
+++ b/drivers/raw/ifpga/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
 
diff --git a/drivers/raw/ioat/version.map b/drivers/raw/ioat/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/raw/ioat/version.map
+++ b/drivers/raw/ioat/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/raw/ntb/version.map b/drivers/raw/ntb/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/raw/ntb/version.map
+++ b/drivers/raw/ntb/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/raw/octeontx2_dma/version.map b/drivers/raw/octeontx2_dma/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/raw/octeontx2_dma/version.map
+++ b/drivers/raw/octeontx2_dma/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/raw/octeontx2_ep/version.map b/drivers/raw/octeontx2_ep/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/raw/octeontx2_ep/version.map
+++ b/drivers/raw/octeontx2_ep/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/raw/skeleton/version.map b/drivers/raw/skeleton/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/raw/skeleton/version.map
+++ b/drivers/raw/skeleton/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/regex/mlx5/version.map b/drivers/regex/mlx5/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/regex/mlx5/version.map
+++ b/drivers/regex/mlx5/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/regex/octeontx2/version.map b/drivers/regex/octeontx2/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/regex/octeontx2/version.map
+++ b/drivers/regex/octeontx2/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/vdpa/ifc/version.map b/drivers/vdpa/ifc/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/vdpa/ifc/version.map
+++ b/drivers/vdpa/ifc/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/drivers/vdpa/mlx5/version.map b/drivers/vdpa/mlx5/version.map
index 4a76d1d52d..c2e0723b4c 100644
--- a/drivers/vdpa/mlx5/version.map
+++ b/drivers/vdpa/mlx5/version.map
@@ -1,3 +1,3 @@
-DPDK_21 {
+DPDK_22 {
 	local: *;
 };
diff --git a/lib/acl/version.map b/lib/acl/version.map
index d97f2927bf..2b18c21601 100644
--- a/lib/acl/version.map
+++ b/lib/acl/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_acl_add_rules;
diff --git a/lib/bitratestats/version.map b/lib/bitratestats/version.map
index 152730bb4e..c15e34d82c 100644
--- a/lib/bitratestats/version.map
+++ b/lib/bitratestats/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_stats_bitrate_calc;
diff --git a/lib/bpf/version.map b/lib/bpf/version.map
index b75a0034bc..0bf35f4876 100644
--- a/lib/bpf/version.map
+++ b/lib/bpf/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_bpf_destroy;
diff --git a/lib/cfgfile/version.map b/lib/cfgfile/version.map
index 180c42b717..02cbccb8ab 100644
--- a/lib/cfgfile/version.map
+++ b/lib/cfgfile/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_cfgfile_add_entry;
diff --git a/lib/cmdline/version.map b/lib/cmdline/version.map
index 9df0272152..980adb4f23 100644
--- a/lib/cmdline/version.map
+++ b/lib/cmdline/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	cirbuf_add_buf_head;
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 9f04737aed..979d823a7c 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_crypto_aead_algorithm_strings;
diff --git a/lib/distributor/version.map b/lib/distributor/version.map
index 1ddcd01fe6..4d9ff07373 100644
--- a/lib/distributor/version.map
+++ b/lib/distributor/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_distributor_clear_returns;
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 887012d02a..beeb986adc 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	__rte_panic;
diff --git a/lib/efd/version.map b/lib/efd/version.map
index 425c0a85a9..0226285245 100644
--- a/lib/efd/version.map
+++ b/lib/efd/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_efd_create;
diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
index 44d30b05ae..3eece75b72 100644
--- a/lib/ethdev/version.map
+++ b/lib/ethdev/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_eth_add_first_rx_callback;
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 7e264d3b8d..88625621ec 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_event_crypto_adapter_caps_get;
diff --git a/lib/gro/version.map b/lib/gro/version.map
index 19dc66b0d4..f8a32e221c 100644
--- a/lib/gro/version.map
+++ b/lib/gro/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_gro_ctx_create;
diff --git a/lib/gso/version.map b/lib/gso/version.map
index 60aa1b54e4..73767623b9 100644
--- a/lib/gso/version.map
+++ b/lib/gso/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_gso_segment;
diff --git a/lib/hash/version.map b/lib/hash/version.map
index 9b9519745c..ce4309aa07 100644
--- a/lib/hash/version.map
+++ b/lib/hash/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_fbk_hash_create;
diff --git a/lib/ip_frag/version.map b/lib/ip_frag/version.map
index 82b308ddb0..33f231fb31 100644
--- a/lib/ip_frag/version.map
+++ b/lib/ip_frag/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_ip_frag_free_death_row;
diff --git a/lib/ipsec/version.map b/lib/ipsec/version.map
index ad3e38b7c8..ba8753eac4 100644
--- a/lib/ipsec/version.map
+++ b/lib/ipsec/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_ipsec_pkt_crypto_group;
diff --git a/lib/jobstats/version.map b/lib/jobstats/version.map
index 3e166ad548..89faa02004 100644
--- a/lib/jobstats/version.map
+++ b/lib/jobstats/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_jobstats_abort;
diff --git a/lib/kni/version.map b/lib/kni/version.map
index bb810a7f2f..cc7790651a 100644
--- a/lib/kni/version.map
+++ b/lib/kni/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_kni_alloc;
diff --git a/lib/kvargs/version.map b/lib/kvargs/version.map
index ce8a9175dd..a07166b4d2 100644
--- a/lib/kvargs/version.map
+++ b/lib/kvargs/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_kvargs_count;
diff --git a/lib/latencystats/version.map b/lib/latencystats/version.map
index 0c4360ab43..be5b014cd7 100644
--- a/lib/latencystats/version.map
+++ b/lib/latencystats/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_latencystats_get;
diff --git a/lib/lpm/version.map b/lib/lpm/version.map
index b4d437cc75..0cdd04822e 100644
--- a/lib/lpm/version.map
+++ b/lib/lpm/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_lpm6_add;
diff --git a/lib/mbuf/version.map b/lib/mbuf/version.map
index b7d98e7eb1..29654330eb 100644
--- a/lib/mbuf/version.map
+++ b/lib/mbuf/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	__rte_pktmbuf_linearize;
diff --git a/lib/member/version.map b/lib/member/version.map
index b8c6322e73..f287aabc91 100644
--- a/lib/member/version.map
+++ b/lib/member/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_member_add;
diff --git a/lib/mempool/version.map b/lib/mempool/version.map
index 50b0602952..9f77da6fff 100644
--- a/lib/mempool/version.map
+++ b/lib/mempool/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_mempool_audit;
diff --git a/lib/meter/version.map b/lib/meter/version.map
index b67f860b15..befa3b7e32 100644
--- a/lib/meter/version.map
+++ b/lib/meter/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_meter_srtcm_config;
diff --git a/lib/metrics/version.map b/lib/metrics/version.map
index 20f99cd19a..c86e405971 100644
--- a/lib/metrics/version.map
+++ b/lib/metrics/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_metrics_get_names;
diff --git a/lib/net/version.map b/lib/net/version.map
index 621f237945..355b7c25b4 100644
--- a/lib/net/version.map
+++ b/lib/net/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_eth_random_addr;
diff --git a/lib/pci/version.map b/lib/pci/version.map
index 1db19a5122..3f38303749 100644
--- a/lib/pci/version.map
+++ b/lib/pci/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_pci_addr_cmp;
diff --git a/lib/pdump/version.map b/lib/pdump/version.map
index 2f9e952d0b..f0a9d12c9a 100644
--- a/lib/pdump/version.map
+++ b/lib/pdump/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_pdump_disable;
diff --git a/lib/pipeline/version.map b/lib/pipeline/version.map
index ff0974c2ee..2b68f584a4 100644
--- a/lib/pipeline/version.map
+++ b/lib/pipeline/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_pipeline_ah_packet_drop;
diff --git a/lib/port/version.map b/lib/port/version.map
index 70922e11ee..73d0825d2e 100644
--- a/lib/port/version.map
+++ b/lib/port/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_port_ethdev_reader_ops;
diff --git a/lib/power/version.map b/lib/power/version.map
index b004e3e4a9..6ec6d5d96d 100644
--- a/lib/power/version.map
+++ b/lib/power/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_power_exit;
diff --git a/lib/rawdev/version.map b/lib/rawdev/version.map
index eb29a3ac0d..4f56870761 100644
--- a/lib/rawdev/version.map
+++ b/lib/rawdev/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_rawdev_close;
diff --git a/lib/rcu/version.map b/lib/rcu/version.map
index 82e55c6329..b63c74f856 100644
--- a/lib/rcu/version.map
+++ b/lib/rcu/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_rcu_log_type;
diff --git a/lib/reorder/version.map b/lib/reorder/version.map
index d902a7fa12..250e6664f5 100644
--- a/lib/reorder/version.map
+++ b/lib/reorder/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_reorder_create;
diff --git a/lib/ring/version.map b/lib/ring/version.map
index e35d6b9712..3377293ee4 100644
--- a/lib/ring/version.map
+++ b/lib/ring/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_ring_create;
diff --git a/lib/sched/version.map b/lib/sched/version.map
index ace284b7de..53c337b143 100644
--- a/lib/sched/version.map
+++ b/lib/sched/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_approx;
diff --git a/lib/security/version.map b/lib/security/version.map
index 22775558c8..c44c7f5f60 100644
--- a/lib/security/version.map
+++ b/lib/security/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_security_attach_session;
diff --git a/lib/stack/version.map b/lib/stack/version.map
index 8c4ca0245d..e145e32451 100644
--- a/lib/stack/version.map
+++ b/lib/stack/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_stack_create;
diff --git a/lib/table/version.map b/lib/table/version.map
index 29301480cb..65f9645d25 100644
--- a/lib/table/version.map
+++ b/lib/table/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_table_acl_ops;
diff --git a/lib/timer/version.map b/lib/timer/version.map
index 8021ccf9cf..4b782456da 100644
--- a/lib/timer/version.map
+++ b/lib/timer/version.map
@@ -1,4 +1,4 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
 	rte_timer_alt_dump_stats;
diff --git a/lib/vhost/version.map b/lib/vhost/version.map
index e2504ba657..c92a9d4962 100644
--- a/lib/vhost/version.map
+++ b/lib/vhost/version.map
@@ -1,12 +1,26 @@
-DPDK_21 {
+DPDK_22 {
 	global:
 
+	rte_vdpa_find_device_by_name;
+	rte_vdpa_get_features;
+	rte_vdpa_get_protocol_features;
+	rte_vdpa_get_queue_num;
+	rte_vdpa_get_rte_device;
+	rte_vdpa_get_stats;
+	rte_vdpa_get_stats_names;
+	rte_vdpa_register_device;
+	rte_vdpa_relay_vring_used;
+	rte_vdpa_reset_stats;
+	rte_vdpa_unregister_device;
 	rte_vhost_avail_entries;
 	rte_vhost_dequeue_burst;
+	rte_vhost_driver_attach_vdpa_device;
 	rte_vhost_driver_callback_register;
+	rte_vhost_driver_detach_vdpa_device;
 	rte_vhost_driver_disable_features;
 	rte_vhost_driver_enable_features;
 	rte_vhost_driver_get_features;
+	rte_vhost_driver_get_vdpa_device;
 	rte_vhost_driver_register;
 	rte_vhost_driver_set_features;
 	rte_vhost_driver_start;
@@ -14,37 +28,23 @@ DPDK_21 {
 	rte_vhost_enable_guest_notification;
 	rte_vhost_enqueue_burst;
 	rte_vhost_get_ifname;
+	rte_vhost_get_log_base;
 	rte_vhost_get_mem_table;
 	rte_vhost_get_mtu;
 	rte_vhost_get_negotiated_features;
 	rte_vhost_get_numa_node;
 	rte_vhost_get_queue_num;
+	rte_vhost_get_vdpa_device;
 	rte_vhost_get_vhost_vring;
+	rte_vhost_get_vring_base;
 	rte_vhost_get_vring_num;
 	rte_vhost_gpa_to_vva;
+	rte_vhost_host_notifier_ctrl;
 	rte_vhost_log_used_vring;
 	rte_vhost_log_write;
 	rte_vhost_rx_queue_count;
-	rte_vhost_vring_call;
-	rte_vhost_get_log_base;
-	rte_vhost_get_vring_base;
 	rte_vhost_set_vring_base;
-	rte_vhost_host_notifier_ctrl;
-	rte_vdpa_register_device;
-	rte_vdpa_unregister_device;
-	rte_vdpa_get_stats_names;
-	rte_vdpa_get_stats;
-	rte_vdpa_reset_stats;
-	rte_vhost_driver_attach_vdpa_device;
-	rte_vhost_driver_detach_vdpa_device;
-	rte_vhost_driver_get_vdpa_device;
-	rte_vhost_get_vdpa_device;
-	rte_vdpa_find_device_by_name;
-	rte_vdpa_get_rte_device;
-	rte_vdpa_get_queue_num;
-	rte_vdpa_get_features;
-	rte_vdpa_get_protocol_features;
-	rte_vdpa_relay_vring_used;
+	rte_vhost_vring_call;
 
 	local: *;
 };
-- 
2.31.1


^ permalink raw reply	[relevance 11%]

* Re: [dpdk-dev] [dpdk-stable] [PATCH v4] app/testpmd: fix testpmd doesn't show RSS hash offload
  2021-07-22 11:03  0%             ` Andrew Rybchenko
@ 2021-08-09  8:53  0%               ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-08-09  8:53 UTC (permalink / raw)
  To: Andrew Rybchenko, Wang, Jie1X, Li, Xiaoyun, dev; +Cc: stable

On 7/22/2021 12:03 PM, Andrew Rybchenko wrote:
> On 7/19/21 7:18 PM, Ferruh Yigit wrote:
>> On 7/19/2021 10:55 AM, Wang, Jie1X wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>>>> Sent: Friday, July 16, 2021 4:52 PM
>>>> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Wang, Jie1X <jie1x.wang@intel.com>;
>>>> dev@dpdk.org
>>>> Cc: andrew.rybchenko@oktetlabs.ru; stable@dpdk.org
>>>> Subject: Re: [dpdk-stable] [PATCH v4] app/testpmd: fix testpmd doesn't show
>>>> RSS hash offload
>>>>
>>>> On 7/16/2021 9:30 AM, Li, Xiaoyun wrote:
>>>>>> -----Original Message-----
>>>>>> From: stable <stable-bounces@dpdk.org> On Behalf Of Li, Xiaoyun
>>>>>> Sent: Thursday, July 15, 2021 12:54
>>>>>> To: Wang, Jie1X <jie1x.wang@intel.com>; dev@dpdk.org
>>>>>> Cc: andrew.rybchenko@oktetlabs.ru; stable@dpdk.org
>>>>>> Subject: Re: [dpdk-stable] [PATCH v4] app/testpmd: fix testpmd
>>>>>> doesn't show RSS hash offload
>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Wang, Jie1X <jie1x.wang@intel.com>
>>>>>>> Sent: Thursday, July 15, 2021 19:57
>>>>>>> To: dev@dpdk.org
>>>>>>> Cc: Li, Xiaoyun <xiaoyun.li@intel.com>;
>>>>>>> andrew.rybchenko@oktetlabs.ru; Wang, Jie1X <jie1x.wang@intel.com>;
>>>>>>> stable@dpdk.org
>>>>>>> Subject: [PATCH v4] app/testpmd: fix testpmd doesn't show RSS hash
>>>>>>> offload
>>>>>>>
>>>>>>> The driver may change offloads info into dev->data->dev_conf in
>>>>>>> dev_configure which may cause port->dev_conf and port->rx_conf
>>>>>>> contain
>>>>>> outdated values.
>>>>>>>
>>>>>>> This patch updates the offloads info if it changes to fix this issue.
>>>>>>>
>>>>>>> Fixes: ce8d561418d4 ("app/testpmd: add port configuration settings")
>>>>>>> Cc: stable@dpdk.org
>>>>>>>
>>>>>>> Signed-off-by: Jie Wang <jie1x.wang@intel.com>
>>>>>>> ---
>>>>>>> v4: delete the whitespace at the end of the line.
>>>>>>> v3:
>>>>>>>   - check and update the "offloads" of "port->dev_conf.rx/txmode".
>>>>>>>   - update the commit log.
>>>>>>> v2: copy "rx/txmode.offloads", instead of copying the entire struct
>>>>>>> "dev->data-
>>>>>>>> dev_conf.rx/txmode".
>>>>>>> ---
>>>>>>>   app/test-pmd/testpmd.c | 27 +++++++++++++++++++++++++++
>>>>>>>   1 file changed, 27 insertions(+)
>>>>>>
>>>>>> Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>
>>>>>
>>>>> Although I gave my ack, app shouldn't touch rte_eth_devices which this patch
>>>> does. Usually, testpmd should only call function like
>>>> eth_dev_info_get_print_err().
>>>>> But dev_info doesn't contain the info dev->data->dev_conf which the driver
>>>> modifies.
>>>>>
>>>>> Probably we need a better fix.
>>>>>
>>>>
>>>> Agree, an application accessing directly to 'rte_eth_devices' is sign of
>>>> something
>>>> missing/wrong.
>>>>
>>>> In this case there is no way for application to know what is the configured
>>>> offload settings per port and queue. Which is missing part I think.
>>>>
>>>> As you said normally we get data from PMD mainly via 'rte_eth_dev_info_get()',
>>>> which is an overloaded function, it provides many different things, like driver
>>>> default values, limitations, current config/status, capabilities etc...
>>>>
>>>> So I think we can do a few things:
>>>> 1) Add current offload configuration to 'rte_eth_dev_info_get()', so
>>>> application
>>>> can get it and use it.
>>>> The advantage is this API already called many places, many times, so there is a
>>>> big chance that application already have this information when it needs.
>>>> Disadvantage is, as mentioned above the API already big and messy, making it
>>>> bigger makes more error prone and makes easier to break ABI.
>>>>
>>> I prefer to choose the 1st suggestion.
>>>
>>> Normally PMD gets data via 'rte_eth_dev_info_get()'. When we add offloads
>>> configuration
>>> to it, we can get offloads as same as getting other info.
>>>
>>
>> Most probably it is easier to implement 1), I see your point but as said before
>> I think 'rte_eth_dev_info_get()' is already messy and I am worried to make it
>> even bigger.
> 
> IMHO, (1) is not an option.
> 
>> I prefer option 2).
> 
> I'm not sure that API function for each config parameter is an option as
> well. We should find a balance. May be I'd add something like
> rte_eth_dev_get_conf(uint16_t port_id, const struct rte_eth_conf **conf)
> which returns a pointer to up-to-date configuration. I.e. option (3).
> 

That is option 3, that can work too.

> The tricky part here is to ensure that all specific API which modifies
> various bits of the configuration updates dev_conf.
> 

They have to, don't they? Otherwise there is nowhere to record the current
config for the PMD either.

>>
>> @Thomas, @Andrew, what do you think?
>>
>>
>>>> 2) Add a new API to get configured offload information, so a specific API
>>>> for it.
>>>>
>>>> 3) Get a more generic API to get configured config (dev_conf) which will cover
>>>> offloads too.
>>>> Disadvantage can be leaking out too many internal config to user
>>>> unintentionally.
> 
> I don't understand it. dev_conf is provided by user on
> rte_eth_dev_configure().

Yes, but the application doesn't provide all of the config; my concern was
whether some internal config should be hidden from applications (possibly via
some APIs).

Overall I am OK to go with option 3, I think it can simplify the application's
life. And later we can have some more updates on testpmd to benefit from the new API.

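As a rough illustration of option (3), using the prototype suggested above,
the helper could look something like the sketch below; the body and the
caller-side usage are assumptions for illustration only, not the API that
ethdev actually ships:

int
rte_eth_dev_get_conf(uint16_t port_id, const struct rte_eth_conf **conf)
{
	struct rte_eth_dev *dev;

	if (!rte_eth_dev_is_valid_port(port_id))
		return -ENODEV;
	dev = &rte_eth_devices[port_id];

	/* Hand back the driver-updated copy kept in dev->data so that
	 * applications never dereference rte_eth_devices themselves. */
	*conf = &dev->data->dev_conf;

	return 0;
}

/* Caller side, e.g. in testpmd, instead of reading rte_eth_devices: */
const struct rte_eth_conf *conf;

if (rte_eth_dev_get_conf(port_id, &conf) == 0)
	port->dev_conf.rxmode.offloads = conf->rxmode.offloads;

As noted above, the tricky part is that every API which modifies part of the
configuration has to keep dev->data->dev_conf up to date for such a getter to
stay reliable.
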
^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v9 1/2] devtools: script to track symbols over releases
  @ 2021-08-09 12:53  5%   ` Ray Kinsella
  2021-08-09 12:53  5%   ` [dpdk-dev] [PATCH v9 2/2] devtools: script to send notifications of expired symbols Ray Kinsella
  1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2021-08-09 12:53 UTC (permalink / raw)
  To: dev; +Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr

This script tracks the growth of stable and experimental symbols
over releases since v19.11. The script has the ability to
count the added symbols between two dpdk releases, and to
list experimental symbols present in two dpdk releases
(expired symbols).

example usages:

Count symbols added since v19.11
$ devtools/symbol-tool.py count-symbols

Count symbols added since v20.11
$ devtools/symbol-tool.py count-symbols --releases v20.11,v21.05

List experimental symbols present in v20.11 and v21.05
$ devtools/symbol-tool.py list-expired --releases v20.11,v21.05

List experimental symbols in libraries only, present since v19.11
$ devtools/symbol-tool.py list-expired --directory lib

Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
 devtools/symbol-tool.py | 402 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 402 insertions(+)
 create mode 100755 devtools/symbol-tool.py

diff --git a/devtools/symbol-tool.py b/devtools/symbol-tool.py
new file mode 100755
index 0000000000..4a357579dc
--- /dev/null
+++ b/devtools/symbol-tool.py
@@ -0,0 +1,402 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+'''Tool to count or list symbols in each DPDK release'''
+from pathlib import Path
+import sys
+import os
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import re
+import datetime
+try:
+    from parsley import makeGrammar
+except ImportError:
+    print('This script uses the package Parsley to parse C Mapfiles.\n'
+          'This can be installed with \"pip install parsley".')
+    sys.exit()
+
+DESCRIPTION = '''
+This script tracks the growth of stable and experimental symbols
+over releases since v19.11. The script has the ability to
+count the added symbols between two dpdk releases, and to
+list experimental symbols present in two dpdk releases
+(expired symbols).
+
+example usages:
+
+Count symbols added since v19.11
+$ {s} count-symbols
+
+Count symbols added since v20.11
+$ {s} count-symbols --releases v20.11,v21.05
+
+List experimental symbols present in v20.11 and v21.05
+$ {s} list-expired --releases v20.11,v21.05
+
+List experimental symbols in libraries only, present since v19.11
+$ {s} list-expired --directory lib
+'''
+
+MAP_GRAMMAR = r"""
+
+ws = (' ' | '\r' | '\n' | '\t')*
+
+ABI_VER = ({})
+DPDK_VER = ('DPDK_' ABI_VER)
+ABI_NAME = ('INTERNAL' | 'EXPERIMENTAL' | DPDK_VER)
+comment = '#' (~'\n' anything)+ '\n'
+symbol = (~(';' | '}}' | '#') anything )+:c ';' -> ''.join(c)
+global = 'global:'
+local = 'local: *;'
+symbols = comment* symbol:s ws comment* -> s
+
+abi = (abi_section+):m -> dict(m)
+abi_section = (ws ABI_NAME:e ws '{{' ws global* (~local ws symbols)*:s ws local* ws '}}' ws DPDK_VER* ';' ws) -> (e,s)
+"""
+
+def get_abi_versions():
+    '''Returns a string of possible dpdk abi versions'''
+
+    year = datetime.date.today().year - 2000
+    tags = " |".join(['\'{}\''.format(i) \
+                     for i in reversed(range(21, year + 1)) ])
+    tags  = tags + ' | \'20.0.1\' | \'20.0\' | \'20\''
+
+    return tags
+
+def get_dpdk_releases():
+    '''Returns a list of dpdk release tag names since v19.11'''
+
+    year = datetime.date.today().year - 2000
+    year_range = "|".join("{}".format(i) for i in range(19,year + 1))
+    pattern = re.compile(r'^\"v(' +  year_range + r')\.\d{2}\"$')
+
+    cmd = ['git', 'for-each-ref', '--sort=taggerdate', '--format', '"%(tag)"']
+    try:
+        result = subprocess.run(cmd, \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        print("Failed to interrogate git for release tags")
+        sys.exit()
+
+
+    tags = result.stdout.decode('utf-8').split('\n')
+
+    # find the non-rc releases between now and v19.11
+    tags = [ tag.replace('\"','') \
+             for tag in reversed(tags) \
+             if pattern.match(tag) ][:-3]
+
+    return tags
+
+def fix_directory_name(path):
+    '''Prepend librte to the source directory name'''
+    mapfilepath1 = str(path.parent.name)
+    mapfilepath2 = str(path.parents[1])
+    mapfilepath = mapfilepath2 + '/librte_' + mapfilepath1
+
+    return mapfilepath
+
+def directory_renamed(path, rel):
+    '''Fix removal of the librte_ from the directory names'''
+
+    mapfilepath = fix_directory_name(path)
+    tagfile = '{}:{}/{}'.format(rel, mapfilepath,  path.name)
+
+    try:
+        result = subprocess.run(['git', 'show', tagfile], \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        result = None
+
+    return result
+
+def mapfile_renamed(path, rel):
+    '''Fix renaming of the map file'''
+    newfile = None
+
+    result = subprocess.run(['git', 'ls-tree', \
+                             rel, str(path.parent) + '/'], \
+                            stdout=subprocess.PIPE, \
+                            stderr=subprocess.PIPE,
+                            check=True)
+    dentries = result.stdout.decode('utf-8')
+    dentries = dentries.split('\n')
+
+    # filter entries looking for the map file
+    dentries = [dentry for dentry in dentries if dentry.endswith('.map')]
+    if len(dentries) > 1 or len(dentries) == 0:
+        return None
+
+    dparts = dentries[0].split('/')
+    newfile = dparts[len(dparts) - 1]
+
+    if newfile is not None:
+        tagfile = '{}:{}/{}'.format(rel, path.parent, newfile)
+
+        try:
+            result = subprocess.run(['git', 'show', tagfile], \
+                                    stdout=subprocess.PIPE, \
+                                    stderr=subprocess.PIPE,
+                                    check=True)
+        except subprocess.CalledProcessError:
+            result = None
+
+    else:
+        result = None
+
+    return result
+
+def mapfile_and_directory_renamed(path, rel):
+    '''Fix renaming of the map file & the source directory'''
+    mapfilepath = Path("{}/{}".format(fix_directory_name(path),path.name))
+
+    return mapfile_renamed(mapfilepath, rel)
+
+FIX_STRATEGIES = [directory_renamed, \
+                  mapfile_renamed, \
+                  mapfile_and_directory_renamed]
+
+def get_symbols(map_parser, release, mapfile_path):
+    '''Count the symbols for a given release and mapfile'''
+    abi_sections = {}
+
+    tagfile = '{}:{}'.format(release,mapfile_path)
+    try:
+        result = subprocess.run(['git', 'show', tagfile], \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        result = None
+
+    for fix_strategy in FIX_STRATEGIES:
+        if result is not None:
+            break
+        result = fix_strategy(mapfile_path, release)
+
+    if result is not None:
+        mapfile = result.stdout.decode('utf-8')
+        abi_sections = map_parser(mapfile).abi()
+
+    return abi_sections
+
+def get_terminal_rows():
+    '''Find the number of rows in the terminal'''
+
+    try:
+        return os.get_terminal_size().lines
+    except IOError:
+        return 0
+
+class SymbolCountOutput():
+    '''Format the output to supported formats'''
+    output_fmt = ""
+    column_fmt = ""
+
+    def __init__(self, format_output, dpdk_releases):
+        self.OUTPUT_FORMATS[format_output](self,dpdk_releases)
+        self.column_titles = ['mapfile'] +  dpdk_releases
+
+        self.terminal_rows = get_terminal_rows()
+        self.row = 0
+
+    def set_terminal_output(self,dpdk_rel):
+        '''Set the output format to Tabbed Separated Values'''
+
+        self.output_fmt = '{:<50}' + \
+            ''.join(['{:<6}{:<6}'] * (len(dpdk_rel)))
+        self.column_fmt = '{:50}' + \
+            ''.join(['{:<12}'] * (len(dpdk_rel)))
+
+    def set_csv_output(self,dpdk_rel):
+        '''Set the output format to Comma Separated Values'''
+
+        self.output_fmt = '{},' + \
+            ','.join(['{},{}'] * (len(dpdk_rel)))
+        self.column_fmt = '{},' + \
+            ','.join(['{},'] * (len(dpdk_rel)))
+
+    def print_columns(self):
+        '''Print column rows with release names'''
+        print(self.column_fmt.format(*self.column_titles))
+        self.row += 1
+
+    def print_row(self, mapfile, symbols):
+        '''Print row of symbol values'''
+        print(self.output_fmt.format(*([mapfile] + symbols)))
+        self.row += 1
+
+        if((self.terminal_rows>0) and ((self.row % self.terminal_rows) == 0)):
+            self.print_columns()
+
+    OUTPUT_FORMATS = { None: set_terminal_output, \
+                   'terminal': set_terminal_output, \
+                   'csv': set_csv_output }
+
+class ListExpiredOutput():
+    '''Format the output to supported formats'''
+    output_fmt = ""
+    column_fmt = ""
+
+    def __init__(self, format_output, dpdk_releases):
+        self.terminal = True
+        self.OUTPUT_FORMATS[format_output](self,dpdk_releases)
+        self.column_titles = ['mapfile'] +  \
+            ['expired (' + ','.join(dpdk_releases) + ')']
+
+    def set_terminal_output(self, _):
+        '''Set the output format to Tabbed Separated Values'''
+
+        self.output_fmt = '{:<50}{:<50}'
+        self.column_fmt = '{:50}{:50}'
+
+    def set_csv_output(self, _):
+        '''Set the output format to Comma Separated Values'''
+
+        self.output_fmt = '{},{}'
+        self.column_fmt = '{},{}'
+        self.terminal = False
+
+    def print_columns(self):
+        '''Print column rows with release names'''
+        print(self.column_fmt.format(*self.column_titles))
+
+    def print_row(self, mapfile, symbols):
+        '''Print row of symbol values'''
+
+        for symbol in symbols:
+            print(self.output_fmt.format(mapfile,symbol))
+            if self.terminal :
+                mapfile = ''
+
+    OUTPUT_FORMATS = { None: set_terminal_output, \
+                   'terminal': set_terminal_output, \
+                   'csv': set_csv_output }
+
+class CountSymbolsAction:
+    ''' Logic to count symbols added since a give release '''
+    IGNORE_SECTIONS = ['EXPERIMENTAL','INTERNAL']
+
+    def __init__(self, mapfile_path, mapfile_parser, format_output):
+        self.path = mapfile_path
+        self.parser = mapfile_parser
+        self.format_output = format_output
+        self.symbols_count = []
+
+    def add_mapfile(self, release):
+        ''' add a version mapfile '''
+        symbol_count = experimental_count = 0
+
+        symbols = get_symbols(self.parser, release, self.path)
+
+        # which versions are present, and we care about
+        abi_vers = [abi_ver \
+                    for abi_ver in symbols \
+                    if abi_ver not in self.IGNORE_SECTIONS]
+
+        for abi_ver in abi_vers:
+            symbol_count += len(symbols[abi_ver])
+
+        # count experimental symbols
+        if 'EXPERIMENTAL' in symbols.keys():
+            experimental_count = len(symbols['EXPERIMENTAL'])
+
+        self.symbols_count += [symbol_count, experimental_count]
+
+    def __del__(self):
+        self.format_output.print_row(self.path.parent, self.symbols_count)
+
+class ListExpiredAction:
+    ''' Logic to list expired symbols between two releases '''
+
+    def __init__(self, mapfile_path, mapfile_parser, format_output):
+        self.path = mapfile_path
+        self.parser = mapfile_parser
+        self.format_output = format_output
+        self.experimental_symbols = []
+
+    def add_mapfile(self, release):
+        ''' add a version mapfile '''
+        symbols = get_symbols(self.parser, release, self.path)
+        if 'EXPERIMENTAL' in symbols.keys():
+            self.experimental_symbols.append(symbols['EXPERIMENTAL'])
+
+    def __del__(self):
+        if len(self.experimental_symbols) != 2:
+            return
+
+        tmp = self.experimental_symbols
+        # find symbols present in both dpdk releases
+        intersect_syms = [sym for sym in tmp[0] if sym in tmp[1]]
+
+        # check for empty set
+        if intersect_syms == []:
+            return
+
+        self.format_output.print_row(self.path.parent, intersect_syms)
+
+SRC_DIRECTORIES = 'drivers,lib'
+
+ACTIONS = {None: CountSymbolsAction, \
+           'count-symbols': CountSymbolsAction, \
+           'list-expired': ListExpiredAction}
+
+ACTION_OUTPUT = {None: SymbolCountOutput, \
+                 'count-symbols': SymbolCountOutput, \
+                 'list-expired': ListExpiredOutput}
+
+def main():
+    '''Main entry point'''
+
+    dpdk_releases = get_dpdk_releases()
+
+    parser = argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__), \
+                                     formatter_class=RawTextHelpFormatter
+                                     )
+    parser.add_argument('mode', choices=['count-symbols','list-expired'])
+    parser.add_argument('--format-output', choices=['terminal','csv'], \
+                        default='terminal')
+    parser.add_argument('--directory', choices=SRC_DIRECTORIES.split(','),
+                        default=SRC_DIRECTORIES)
+    parser.add_argument('--releases', \
+                        help='2 x comma separated release tags e.g. \'' \
+                        + ','.join([dpdk_releases[0],dpdk_releases[-1]]) \
+                        + '\'')
+    args = parser.parse_args()
+
+    if args.releases is not None:
+        dpdk_releases = args.releases.split(',')
+
+    if args.mode == 'list-expired':
+        if len(dpdk_releases) < 2:
+            sys.exit('Please specify two releases to compare ' \
+                     'in \'list-expired\' mode.')
+        dpdk_releases = [dpdk_releases[0], dpdk_releases[len(dpdk_releases) - 1]]
+
+    action = ACTIONS[args.mode]
+    format_output = ACTION_OUTPUT[args.mode](args.format_output, dpdk_releases)
+
+    map_grammar = MAP_GRAMMAR.format(get_abi_versions())
+    map_parser = makeGrammar(map_grammar, {})
+
+    format_output.print_columns()
+
+    for src_dir in args.directory.split(','):
+        for path in Path(src_dir).rglob('*.map'):
+            release_action = action(path, map_parser, format_output)
+
+            for release in dpdk_releases:
+                release_action.add_mapfile(release)
+
+            # all the magic happens in the destructor
+            del release_action
+
+if __name__ == '__main__':
+    main()
-- 
2.26.2


^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v9 2/2] devtools: script to send notifications of expired symbols
    2021-08-09 12:53  5%   ` [dpdk-dev] [PATCH v9 1/2] devtools: script to track symbols over releases Ray Kinsella
@ 2021-08-09 12:53  5%   ` Ray Kinsella
  1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2021-08-09 12:53 UTC (permalink / raw)
  To: dev; +Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr

Use this script with the output of the DPDK symbol tool, to notify
maintainers of expired symbols by email. You need to define the environment
variable DPDK_GETMAINTAINER_PATH for this tool to work.

Use terminal output to review the emails before sending.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify_expired_symbols.py --format-output terminal

Then use email output to send the emails to the maintainers.
e.g.
$ devtools/symbol-tool.py list-expired --format-output csv \
| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \
devtools/notify_expired_symbols.py --format-output email \
--smtp-server <server> --sender <someone@somewhere.com> \
--password <password>

Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
 devtools/notify-symbol-maintainers.py | 234 ++++++++++++++++++++++++++
 1 file changed, 234 insertions(+)
 create mode 100755 devtools/notify-symbol-maintainers.py

diff --git a/devtools/notify-symbol-maintainers.py b/devtools/notify-symbol-maintainers.py
new file mode 100755
index 0000000000..a6c27b067c
--- /dev/null
+++ b/devtools/notify-symbol-maintainers.py
@@ -0,0 +1,234 @@
+#!/usr/bin/env python3
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2021 Intel Corporation
+'''Tool to notify maintainers of expired symbols'''
+import smtplib
+import ssl
+import sys
+import subprocess
+import argparse
+from argparse import RawTextHelpFormatter
+import time
+from email.message import EmailMessage
+
+DESCRIPTION = '''
+Use this script with the output of the DPDK symbol tool, to notify maintainers
+of expired symbols by email. You need to define the environment variable
+DPDK_GETMAINTAINER_PATH, for this tool to work.
+
+Use terminal output to review the emails before sending.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output terminal
+
+Then use email output to send the emails to the maintainers.
+e.g.
+$ devtools/symbol-tool.py list-expired --format-output csv \\
+| DPDK_GETMAINTAINER_PATH=<somewhere>/get_maintainer.pl \\
+{s} --format-output email \\
+--smtp-server <server> --sender <someone@somewhere.com> --password <password>
+'''
+
+EMAIL_TEMPLATE = '''Hi there,
+
+Please note the symbols listed below have expired. In line with the DPDK ABI
+policy, they should be scheduled for removal in the next DPDK release.
+
+For more information, please see the DPDK ABI Policy, section 3.5.3.
+https://doc.dpdk.org/guides/contributing/abi_policy.html
+
+Thanks,
+
+The DPDK Symbol Bot
+
+'''
+
+ABI_POLICY = 'doc/guides/contributing/abi_policy.rst'
+get_maintainer = ['devtools/get-maintainer.sh', \
+                  '--email', '-f']
+
+def _get_maintainers(libpath):
+    '''Get the maintainers for given library'''
+    try:
+        cmd = get_maintainer + [libpath]
+        result = subprocess.run(cmd, \
+                                stdout=subprocess.PIPE, \
+                                stderr=subprocess.PIPE,
+                                check=True)
+    except subprocess.CalledProcessError:
+        return None
+
+    if result is None:
+        return None
+
+    email = result.stdout.decode('utf-8')
+    if email == '':
+        return None
+
+    email = list(filter(None,email.split('\n')))
+    return email
+
+default_maintainers = _get_maintainers(ABI_POLICY)
+
+def get_maintainers(libpath):
+    '''Get the maintainers for given library'''
+    maintainers=_get_maintainers(libpath)
+
+    if maintainers is None:
+        maintainers = default_maintainers
+
+    return maintainers
+
+def get_message(library, symbols):
+    '''Build email message from symbols, config and maintainers'''
+    message = {}
+    maintainers = get_maintainers(library)
+
+    message['To'] = maintainers
+    if maintainers != default_maintainers:
+        message['CC'] = default_maintainers
+
+    message['Subject'] = 'Expired symbols in {}\n'.format(library)
+
+    body = EMAIL_TEMPLATE
+    for sym in symbols:
+        body += ('{}\n'.format(sym))
+
+    message['Body'] = body
+
+    return message
+
+class OutputEmail():
+    '''Format the output for email'''
+    def __init__(self, config):
+        self.config = config
+
+        self.terminal = OutputTerminal(config)
+        context = ssl.create_default_context()
+
+        # Try to log in to server and send email
+        try:
+            self.server = smtplib.SMTP(config['smtp_server'], 587)
+            self.server.starttls(context=context) # Secure the connection
+            self.server.login(config['sender'], config['password'])
+        except Exception as exception:
+            print(exception)
+            raise exception
+
+    def message(self,message):
+        '''send email'''
+        self.terminal.message(message)
+
+        msg = EmailMessage()
+        msg.set_content(message.pop('Body'))
+
+        for key in message.keys():
+            msg[key] = message[key]
+
+        msg['From'] = self.config['sender']
+        msg['Reply-To'] = 'no-reply@dpdk.org'
+
+        self.server.send_message(msg)
+
+        time.sleep(1)
+
+    def __del__(self):
+        self.server.quit()
+
+class OutputTerminal(): # pylint: disable=too-few-public-methods
+    '''Format the output for the terminal'''
+    def __init__(self, config):
+        self.config = config
+
+    def message(self,message):
+        '''Print email to terminal'''
+        terminal = 'To:' + ', '.join(message['To']) + '\n'
+        if 'sender' in self.config.keys():
+            terminal += 'From:' + self.config['sender'] + '\n'
+
+        terminal += 'Reply-To:' + 'no-reply@dpdk.org' + '\n'
+        if 'CC' in message.keys():
+            terminal += 'CC:' + ', '.join(message['CC']) + '\n'
+
+        terminal += 'Subject:' + message['Subject'] + '\n'
+        terminal += 'Body:' + message['Body'] + '\n'
+
+        print(terminal)
+        print('-' * 80)
+
+def parse_config(args):
+    '''put the command line args in the right places'''
+    config = {}
+    error_msg = None
+
+    outputs = {
+        None : OutputTerminal,
+        'terminal' : OutputTerminal,
+        'email' : OutputEmail
+    }
+
+    if args.format_output == 'email':
+        if args.smtp_server is None:
+            error_msg = 'SMTP server'
+        else:
+            config['smtp_server'] = args.smtp_server
+
+        if args.sender is None:
+            error_msg = 'sender'
+        else:
+            config['sender'] = args.sender
+
+        if args.password is None:
+            error_msg = 'password'
+        else:
+            config['password'] = args.password
+
+    if error_msg is not None:
+        print('Please specify a {} for email output'.format(error_msg))
+        return None
+
+    config['output'] = outputs[args.format_output]
+    return config
+
+def main():
+    '''Main entry point'''
+    parser = argparse.ArgumentParser(description=DESCRIPTION.format(s=__file__), \
+                                     formatter_class=RawTextHelpFormatter)
+    parser.add_argument('--format-output', choices=['terminal','email'], \
+                        default='terminal')
+    parser.add_argument('--smtp-server')
+    parser.add_argument('--password')
+    parser.add_argument('--sender')
+
+    args = parser.parse_args()
+    config = parse_config(args)
+    if config is None:
+        return
+
+    symbols = []
+    lastlib = library = ''
+
+    output = config['output'](config)
+
+    for line in sys.stdin:
+        line = line.rstrip('\n')
+        library, symbol = [line[:line.find(',')], \
+                           line[line.find(',') + 1: len(line)]]
+        if library == 'mapfile':
+            continue
+
+        if library != lastlib:
+            message = get_message(lastlib, symbols)
+            output.message(message)
+            symbols = []
+
+        lastlib = library
+        symbols = symbols + [symbol]
+
+    #print the last library
+    message = get_message(lastlib, symbols)
+    output.message(message)
+
+if __name__ == '__main__':
+    main()
-- 
2.26.2


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [RFC] ethdev: change queue release callback
  @ 2021-08-09 15:31  4%     ` Ferruh Yigit
  2021-08-10  8:03  3%       ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-08-09 15:31 UTC (permalink / raw)
  To: Singh, Aman Deep, Andrew Rybchenko, Xueming Li
  Cc: dev, Viacheslav Ovsiienko, Thomas Monjalon

On 8/9/2021 3:39 PM, Singh, Aman Deep wrote:
> Hi Xueming,
> 
> On 7/28/2021 1:10 PM, Andrew Rybchenko wrote:
>> On 7/27/21 6:41 AM, Xueming Li wrote:
>>> To align with other eth device queue configuration callbacks, change RX
>>> and TX queue release callback API parameter from queue object to device
>>> and queue index.
>>>
>>> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
>>
>> In fact, there is no strong reasons to do it, but I think it is a nice
>> cleanup to use (dev + queue index) on control path.
>>
>> Hopefully it will not result in any regressions.
> 
> Combined there are 100+ API's for Rx/Tx queue_release that need to be modified
> for it.
> 
> I believe all regression possibilities here will be caught, in compilation phase
> itself.
> 

Same here, it is a good cleanup but there is no strong reason for it.

Since it is all internal, there is no ABI restriction on the patch, and v21.11
will be full of ABI-breaking patches. To avoid conflicts with this change, what
would you think about having it in v22.02?

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [RFC] ethdev: change queue release callback
  2021-08-09 15:31  4%     ` Ferruh Yigit
@ 2021-08-10  8:03  3%       ` Xueming(Steven) Li
  2021-08-10  8:54  0%         ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Xueming(Steven) Li @ 2021-08-10  8:03 UTC (permalink / raw)
  To: Ferruh Yigit, Singh, Aman Deep, Andrew Rybchenko
  Cc: dev, Slava Ovsiienko, NBU-Contact-Thomas Monjalon

Hi Singh and Ferruh,

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Monday, August 9, 2021 11:31 PM
> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; Xueming(Steven) Li
> <xuemingl@nvidia.com>
> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
> 
> On 8/9/2021 3:39 PM, Singh, Aman Deep wrote:
> > Hi Xueming,
> >
> > On 7/28/2021 1:10 PM, Andrew Rybchenko wrote:
> >> On 7/27/21 6:41 AM, Xueming Li wrote:
> >>> To align with other eth device queue configuration callbacks, change
> >>> RX and TX queue release callback API parameter from queue object to
> >>> device and queue index.
> >>>
> >>> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> >>
> >> In fact, there is no strong reasons to do it, but I think it is a
> >> nice cleanup to use (dev + queue index) on control path.
> >>
> >> Hopefully it will not result in any regressions.
> >
> > Combined there are 100+ API's for Rx/Tx queue_release that need to be
> > modified for it.
> >
> > I believe all regression possibilities here will be caught, in
> > compilation phase itself.
> >
> 
> Same here, it is a good cleanup but there is no strong reason for it.
> 
> Since it is all internal, there is no ABI restriction on the patch, and v21.11 will be full ABI break patches, to not cause conflicts with this
> change, what would you think to have it on v22.02?

This patch is required by the shared-rxq feature, which breaks ABI and targets 21.11.
I'll do it carefully; fortunately, the change is straightforward.


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [RFC] ethdev: change queue release callback
  2021-08-10  8:03  3%       ` Xueming(Steven) Li
@ 2021-08-10  8:54  0%         ` Ferruh Yigit
  2021-08-10  9:07  0%           ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-08-10  8:54 UTC (permalink / raw)
  To: Xueming(Steven) Li, Singh, Aman Deep, Andrew Rybchenko
  Cc: dev, Slava Ovsiienko, NBU-Contact-Thomas Monjalon

On 8/10/2021 9:03 AM, Xueming(Steven) Li wrote:
> Hi Singh and Ferruh,
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Monday, August 9, 2021 11:31 PM
>> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; Xueming(Steven) Li
>> <xuemingl@nvidia.com>
>> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
>> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
>>
>> On 8/9/2021 3:39 PM, Singh, Aman Deep wrote:
>>> Hi Xueming,
>>>
>>> On 7/28/2021 1:10 PM, Andrew Rybchenko wrote:
>>>> On 7/27/21 6:41 AM, Xueming Li wrote:
>>>>> To align with other eth device queue configuration callbacks, change
>>>>> RX and TX queue release callback API parameter from queue object to
>>>>> device and queue index.
>>>>>
>>>>> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
>>>>
>>>> In fact, there is no strong reasons to do it, but I think it is a
>>>> nice cleanup to use (dev + queue index) on control path.
>>>>
>>>> Hopefully it will not result in any regressions.
>>>
>>> Combined there are 100+ API's for Rx/Tx queue_release that need to be
>>> modified for it.
>>>
>>> I believe all regression possibilities here will be caught, in
>>> compilation phase itself.
>>>
>>
>> Same here, it is a good cleanup but there is no strong reason for it.
>>
>> Since it is all internal, there is no ABI restriction on the patch, and v21.11 will be full ABI break patches, to not cause conflicts with this
>> change, what would you think to have it on v22.02?
> 
> This patch is required by shared-rxq feature which ABI broken, target to 21.11.

Why it is required?

> I'll do it carefully, fortunately, the change is straightforward.
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC] ethdev: change queue release callback
  2021-08-10  8:54  0%         ` Ferruh Yigit
@ 2021-08-10  9:07  0%           ` Xueming(Steven) Li
  2021-08-11 11:57  0%             ` Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Xueming(Steven) Li @ 2021-08-10  9:07 UTC (permalink / raw)
  To: Ferruh Yigit, Singh, Aman Deep, Andrew Rybchenko
  Cc: dev, Slava Ovsiienko, NBU-Contact-Thomas Monjalon



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Tuesday, August 10, 2021 4:54 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>
> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
> 
> On 8/10/2021 9:03 AM, Xueming(Steven) Li wrote:
> > Hi Singh and Ferruh,
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >> Sent: Monday, August 9, 2021 11:31 PM
> >> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew Rybchenko
> >> <andrew.rybchenko@oktetlabs.ru>; Xueming(Steven) Li
> >> <xuemingl@nvidia.com>
> >> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>;
> >> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> >> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
> >>
> >> On 8/9/2021 3:39 PM, Singh, Aman Deep wrote:
> >>> Hi Xueming,
> >>>
> >>> On 7/28/2021 1:10 PM, Andrew Rybchenko wrote:
> >>>> On 7/27/21 6:41 AM, Xueming Li wrote:
> >>>>> To align with other eth device queue configuration callbacks,
> >>>>> change RX and TX queue release callback API parameter from queue
> >>>>> object to device and queue index.
> >>>>>
> >>>>> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> >>>>
> >>>> In fact, there is no strong reasons to do it, but I think it is a
> >>>> nice cleanup to use (dev + queue index) on control path.
> >>>>
> >>>> Hopefully it will not result in any regressions.
> >>>
> >>> Combined there are 100+ API's for Rx/Tx queue_release that need to
> >>> be modified for it.
> >>>
> >>> I believe all regression possibilities here will be caught, in
> >>> compilation phase itself.
> >>>
> >>
> >> Same here, it is a good cleanup but there is no strong reason for it.
> >>
> >> Since it is all internal, there is no ABI restriction on the patch,
> >> and v21.11 will be full ABI break patches, to not cause conflicts with this change, what would you think to have it on v22.02?
> >
> > This patch is required by shared-rxq feature which ABI broken, target to 21.11.
> 
> Why it is required?

In the rx burst function, the rxq object is used in the data path. For best data-path performance, it is the shared-rxq object when shared rxq is enabled.
I think the eth API defined the rxq object for performance as well, specifically for the data plane.
In my case, the hardware saves the port info in the received packet descriptor.
The control path can't tell which device's queue a shared rxq object belongs to, so it can't use the shared rxq object any more and has to be specific about the dev and queue id.

> 
> > I'll do it carefully, fortunately, the change is straightforward.
> >


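For reference, a minimal sketch of the callback signature change under
discussion, based on the internal eth_queue_release_t typedef; the foo_* name
and the body are hypothetical, for illustration only:

/* Before (current definition in ethdev): the PMD callback only receives
 * the opaque queue object:
 *
 *   typedef void (*eth_queue_release_t)(void *queue);
 *
 * After, as proposed by the RFC, the callback receives the device and the
 * queue index, like the other queue configuration callbacks:
 */
typedef void (*eth_queue_release_t)(struct rte_eth_dev *dev,
				    uint16_t queue_id);

/* A PMD implementation would then look roughly like this: */
static void
foo_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
{
	void *rxq = dev->data->rx_queues[qid];

	if (rxq == NULL)
		return;
	/* With dev + queue index, the control path can tell which port a
	 * shared Rx queue belongs to before releasing it. */
	rte_free(rxq);
	dev->data->rx_queues[qid] = NULL;
}
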
^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [Bug 788] i40e: 16BYTE_RX_DESC build broken on FreeBSD-13
@ 2021-08-10 18:27  5% bugzilla
  0 siblings, 0 replies; 200+ results
From: bugzilla @ 2021-08-10 18:27 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=788

            Bug ID: 788
           Summary: i40e: 16BYTE_RX_DESC build broken on FreeBSD-13
           Product: DPDK
           Version: 21.08
          Hardware: x86
                OS: FreeBSD
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: brian90013@gmail.com
  Target Milestone: ---

Hello,

I just tried compiling DPDK 21.08 and found my configuration no longer builds
on FreeBSD-13.0. With version 21.05, I defined RTE_LIBRTE_I40E_16BYTE_RX_DESC
in rte_config.h as described in section "Use 16 Bytes RX Descriptor Size" of
the current i40e PMD documentation. I also defined a similar variable
RTE_LIBRTE_ICE_16BYTE_RX_DESC in rte_config.h for the ice PMD.
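
For reference, that configuration amounts to something like the following
added to rte_config.h (the define names are as given in this report; the
exact values are illustrative):

/* Use 16-byte Rx descriptors in the i40e and ice PMDs. */
#define RTE_LIBRTE_I40E_16BYTE_RX_DESC 1
#define RTE_LIBRTE_ICE_16BYTE_RX_DESC 1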

This morning I brought in version 21.08 and watched it compile on FreeBSD-12.2
(clang version 10.0.1) running on an 'Intel(R) Xeon(R) CPU E5-2637 v3'. Then I
tried building it on FreeBSD-13.0 (clang version 11.0.1) on an 'AMD Ryzen
Threadripper 3990X 64-Core Processor', but the build died with a number of
compilation errors about avx512f intrinsics being inlined into functions
compiled without support for avx512f.

Below I have an edited build log from the FreeBSD-12.2 system that works,
followed by the log from the FreeBSD-13.0 system that fails. Looking at the
12.2 log, there is a warning “Binutils error with AVX512 assembly, disabling
AVX512 support”; could that warning be hiding this issue? Neither system has
hardware support for AVX-512, but it appears that the compiler does. Thank you
for your help!



*** FreeBSD-12.2 build that works ***
The Meson build system
Version: 0.58.1
Build type: native build
Program cat found: YES (/bin/cat)
Project name: DPDK
Project version: 21.08.0
C compiler for the host machine: cc (clang 10.0.1 "FreeBSD clang version 10.0.1
(git@github.com:llvm/llvm-project.git llvmorg-10.0.1-0-gef32c611aa2)")
C linker for the host machine: cc ld.lld 10.0.1
Host machine cpu family: x86_64
Host machine cpu: x86_64

Compiler for C supports arguments -mno-avx512f: YES 
config/x86/meson.build:9: WARNING: Binutils error with AVX512 assembly,
disabling AVX512 support
Compiler for C supports arguments -mavx512f: YES 
Checking if "AVX512 checking" compiles: YES 
Fetching value of define "__SSE4_2__" : 1 
Fetching value of define "__AES__" : 1 
Fetching value of define "__AVX__" : 1 
Fetching value of define "__AVX2__" : 1 
Fetching value of define "__AVX512BW__" :  
Fetching value of define "__AVX512CD__" :  
Fetching value of define "__AVX512DQ__" :  
Fetching value of define "__AVX512F__" :  
Fetching value of define "__AVX512VL__" :  
Fetching value of define "__PCLMUL__" : 1 
Fetching value of define "__RDRND__" : 1 
Fetching value of define "__RDSEED__" :  
Fetching value of define "__VPCLMULQDQ__" :  

Compiler for C supports arguments -mpclmul: YES 
Compiler for C supports arguments -maes: YES 



*** FreeBSD-13.0 system that does not build ***
The Meson build system
Version: 0.58.1
Build type: native build
Program cat found: YES (/bin/cat)
Project name: DPDK
Project version: 21.08.0
C compiler for the host machine: cc (clang 11.0.1 "FreeBSD clang version 11.0.1
(git@github.com:llvm/llvm-project.git llvmorg-11.0.1-0-g43ff75f2c3fe)")
C linker for the host machine: cc ld.lld 11.0.1
Host machine cpu family: x86_64
Host machine cpu: x86_64

Compiler for C supports arguments -mavx512f: YES 
Checking if "AVX512 checking" compiles: YES 
Fetching value of define "__SSE4_2__" : 1 
Fetching value of define "__AES__" : 1 
Fetching value of define "__AVX__" : 1 
Fetching value of define "__AVX2__" : 1 
Fetching value of define "__AVX512BW__" :  
Fetching value of define "__AVX512CD__" :  
Fetching value of define "__AVX512DQ__" :  
Fetching value of define "__AVX512F__" :  
Fetching value of define "__AVX512VL__" :  
Fetching value of define "__PCLMUL__" : 1 
Fetching value of define "__RDRND__" : 1 
Fetching value of define "__RDSEED__" : 1 
Fetching value of define "__VPCLMULQDQ__" :  

Compiler for C supports arguments -mpclmul: YES 
Compiler for C supports arguments -maes: YES 
Compiler for C supports arguments -mavx512f: YES (cached)
Compiler for C supports arguments -mavx512bw: YES 
Compiler for C supports arguments -mavx512dq: YES 
Compiler for C supports arguments -mavx512vl: YES 
Compiler for C supports arguments -mvpclmulqdq: YES 
Compiler for C supports arguments -mavx2: YES 
Compiler for C supports arguments -mavx: YES 

Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw:
YES 
Compiler for C supports arguments -mavx512f -mavx512dq: YES 



FAILED: drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 
cc -Idrivers/libtmp_rte_net_i40e.a.p -Idrivers -I../drivers -Idrivers/net/i40e
-I../drivers/net/i40e -Idrivers/net/i40e/base -I../drivers/net/i40e/base
-Ilib/ethdev -I../lib/ethdev -I. -I.. -Iconfig -I../config -Ilib/eal/include
-I../lib/eal/include -Ilib/eal/freebsd/include -I../lib/eal/freebsd/include
-Ilib/eal/x86/include -I../lib/eal/x86/include -Ilib/eal/common
-I../lib/eal/common -Ilib/eal -I../lib/eal -Ilib/kvargs -I../lib/kvargs
-Ilib/metrics -I../lib/metrics -Ilib/telemetry -I../lib/telemetry -Ilib/net
-I../lib/net -Ilib/mbuf -I../lib/mbuf -Ilib/mempool -I../lib/mempool -Ilib/ring
-I../lib/ring -Ilib/meter -I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci
-I../drivers/bus/pci/bsd -Ilib/pci -I../lib/pci -Idrivers/bus/vdev
-I../drivers/bus/vdev -Ilib/hash -I../lib/hash -Ilib/rcu -I../lib/rcu
-fcolor-diagnostics -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -O3 -include
rte_config.h -Wextra -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral
-Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs
-Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes
-Wundef -Wwrite-strings -Wno-address-of-packed-member
-Wno-missing-field-initializers -D_GNU_SOURCE -D__BSD_VISIBLE -fPIC
-march=native -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -DPF_DRIVER
-DVF_DRIVER -DINTEGRATED_VF -DX722_A0_SUPPORT -DCC_AVX2_SUPPORT
-DCC_AVX512_SUPPORT -DRTE_LOG_DEFAULT_LOGTYPE=pmd.net.i40e -MD -MQ
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o -MF
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o.d -o
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o -c
../drivers/net/i40e/i40e_rxtx_vec_avx2.c
In file included from ../drivers/net/i40e/i40e_rxtx_vec_avx2.c:13:
../drivers/net/i40e/i40e_rxtx_vec_common.h:337:22: error: always_inline
function '_mm512_set1_epi64' requires target feature 'avx512f', but would be
inlined into function 'i40e_rxq_rearm_common' that is compiled without support
for 'avx512f'
                __m512i hdr_room = _mm512_set1_epi64(RTE_PKTMBUF_HEADROOM);
                                   ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:337:22: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:385:5: error:
'__builtin_ia32_inserti64x4' needs target feature avx512f
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr0_1),
                                ^
/usr/lib/clang/11.0.1/include/avx512fintrin.h:7413:12: note: expanded from
macro '_mm512_inserti64x4'
  (__m512i)__builtin_ia32_inserti64x4((__v8di)(__m512i)(A), \
           ^
In file included from ../drivers/net/i40e/i40e_rxtx_vec_avx2.c:13:
../drivers/net/i40e/i40e_rxtx_vec_common.h:385:24: error: always_inline
function '_mm512_castsi256_si512' requires target feature 'avx512f', but would
be inlined into function 'i40e_rxq_rearm_common' that is compiled without
support for 'avx512f'
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr0_1),
                                                   ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:385:24: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:388:5: error:
'__builtin_ia32_inserti64x4' needs target feature avx512f
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr4_5),
                                ^
/usr/lib/clang/11.0.1/include/avx512fintrin.h:7413:12: note: expanded from
macro '_mm512_inserti64x4'
  (__m512i)__builtin_ia32_inserti64x4((__v8di)(__m512i)(A), \
           ^
In file included from ../drivers/net/i40e/i40e_rxtx_vec_avx2.c:13:
../drivers/net/i40e/i40e_rxtx_vec_common.h:388:24: error: always_inline
function '_mm512_castsi256_si512' requires target feature 'avx512f', but would
be inlined into function 'i40e_rxq_rearm_common' that is compiled without
support for 'avx512f'
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr4_5),
                                                   ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:388:24: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:392:18: error: always_inline
function '_mm512_unpackhi_epi64' requires target feature 'avx512f', but would
be inlined into function 'i40e_rxq_rearm_common' that is compiled without
support for 'avx512f'
                        dma_addr0_3 = _mm512_unpackhi_epi64(vaddr0_3,
vaddr0_3);
                                      ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:392:18: error: AVX vector argument
of type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:393:18: error: always_inline
function '_mm512_unpackhi_epi64' requires target feature 'avx512f', but would
be inlined into function 'i40e_rxq_rearm_common' that is compiled without
support for 'avx512f'
                        dma_addr4_7 = _mm512_unpackhi_epi64(vaddr4_7,
vaddr4_7);
                                      ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:393:18: error: AVX vector argument
of type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:396:18: error: always_inline
function '_mm512_add_epi64' requires target feature 'avx512f', but would be
inlined into function 'i40e_rxq_rearm_common' that is compiled without support
for 'avx512f'
                        dma_addr0_3 = _mm512_add_epi64(dma_addr0_3, hdr_room);
                                      ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:396:18: error: AVX vector argument
of type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:397:18: error: always_inline
function '_mm512_add_epi64' requires target feature 'avx512f', but would be
inlined into function 'i40e_rxq_rearm_common' that is compiled without support
for 'avx512f'
                        dma_addr4_7 = _mm512_add_epi64(dma_addr4_7, hdr_room);
                                      ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:397:18: error: AVX vector argument
of type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:400:4: error: always_inline function
'_mm512_store_si512' requires target feature 'avx512f', but would be inlined
into function 'i40e_rxq_rearm_common' that is compiled without support for
'avx512f'
                        _mm512_store_si512((__m512i *)&rxdp->read,
dma_addr0_3);
                        ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:400:4: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:401:4: error: always_inline function
'_mm512_store_si512' requires target feature 'avx512f', but would be inlined
into function 'i40e_rxq_rearm_common' that is compiled without support for
'avx512f'
                        _mm512_store_si512((__m512i *)&(rxdp + 4)->read,
dma_addr4_7);
                        ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
[971/1893] Compiling C object
drivers/libtmp_rte_net_ice.a.p/net_ice_ice_rxtx_vec_avx2.c.o
FAILED: drivers/libtmp_rte_net_ice.a.p/net_ice_ice_rxtx_vec_avx2.c.o 
cc -Idrivers/libtmp_rte_net_ice.a.p -Idrivers -I../drivers -Idrivers/net/ice
-I../drivers/net/ice -Idrivers/net/ice/base -I../drivers/net/ice/base
-Idrivers/common/iavf -I../drivers/common/iavf -Ilib/ethdev -I../lib/ethdev -I.
-I.. -Iconfig -I../config -Ilib/eal/include -I../lib/eal/include
-Ilib/eal/freebsd/include -I../lib/eal/freebsd/include -Ilib/eal/x86/include
-I../lib/eal/x86/include -Ilib/eal/common -I../lib/eal/common -Ilib/eal
-I../lib/eal -Ilib/kvargs -I../lib/kvargs -Ilib/metrics -I../lib/metrics
-Ilib/telemetry -I../lib/telemetry -Ilib/net -I../lib/net -Ilib/mbuf
-I../lib/mbuf -Ilib/mempool -I../lib/mempool -Ilib/ring -I../lib/ring
-Ilib/meter -I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci
-I../drivers/bus/pci/bsd -Ilib/pci -I../lib/pci -Idrivers/bus/vdev
-I../drivers/bus/vdev -Ilib/hash -I../lib/hash -Ilib/rcu -I../lib/rcu
-fcolor-diagnostics -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -O3 -include
rte_config.h -Wextra -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral
-Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs
-Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes
-Wundef -Wwrite-strings -Wno-address-of-packed-member
-Wno-missing-field-initializers -D_GNU_SOURCE -D__BSD_VISIBLE -fPIC
-march=native -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -DCC_AVX2_SUPPORT
-DCC_AVX512_SUPPORT -DRTE_LOG_DEFAULT_LOGTYPE=pmd.net.ice -MD -MQ
drivers/libtmp_rte_net_ice.a.p/net_ice_ice_rxtx_vec_avx2.c.o -MF
drivers/libtmp_rte_net_ice.a.p/net_ice_ice_rxtx_vec_avx2.c.o.d -o
drivers/libtmp_rte_net_ice.a.p/net_ice_ice_rxtx_vec_avx2.c.o -c
../drivers/net/ice/ice_rxtx_vec_avx2.c
In file included from ../drivers/net/ice/ice_rxtx_vec_avx2.c:5:
../drivers/net/ice/ice_rxtx_vec_common.h:422:22: error: always_inline function
'_mm512_set1_epi64' requires target feature 'avx512f', but would be inlined
into function 'ice_rxq_rearm_common' that is compiled without support for
'avx512f'
                __m512i hdr_room = _mm512_set1_epi64(RTE_PKTMBUF_HEADROOM);
                                   ^
../drivers/net/ice/ice_rxtx_vec_common.h:422:22: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:470:5: error:
'__builtin_ia32_inserti64x4' needs target feature avx512f
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr0_1),
                                ^
/usr/lib/clang/11.0.1/include/avx512fintrin.h:7413:12: note: expanded from
macro '_mm512_inserti64x4'
  (__m512i)__builtin_ia32_inserti64x4((__v8di)(__m512i)(A), \
           ^
In file included from ../drivers/net/ice/ice_rxtx_vec_avx2.c:5:
../drivers/net/ice/ice_rxtx_vec_common.h:470:24: error: always_inline function
'_mm512_castsi256_si512' requires target feature 'avx512f', but would be
inlined into function 'ice_rxq_rearm_common' that is compiled without support
for 'avx512f'
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr0_1),
                                                   ^
../drivers/net/ice/ice_rxtx_vec_common.h:470:24: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:473:5: error:
'__builtin_ia32_inserti64x4' needs target feature avx512f
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr4_5),
                                ^
/usr/lib/clang/11.0.1/include/avx512fintrin.h:7413:12: note: expanded from
macro '_mm512_inserti64x4'
  (__m512i)__builtin_ia32_inserti64x4((__v8di)(__m512i)(A), \
           ^
In file included from ../drivers/net/ice/ice_rxtx_vec_avx2.c:5:
../drivers/net/ice/ice_rxtx_vec_common.h:473:24: error: always_inline function
'_mm512_castsi256_si512' requires target feature 'avx512f', but would be
inlined into function 'ice_rxq_rearm_common' that is compiled without support
for 'avx512f'
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr4_5),
                                                   ^
../drivers/net/ice/ice_rxtx_vec_common.h:473:24: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:477:18: error: always_inline function
'_mm512_unpackhi_epi64' requires target feature 'avx512f', but would be inlined
into function 'ice_rxq_rearm_common' that is compiled without support for
'avx512f'
                        dma_addr0_3 = _mm512_unpackhi_epi64(vaddr0_3,
vaddr0_3);
                                      ^
../drivers/net/ice/ice_rxtx_vec_common.h:477:18: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:478:18: error: always_inline function
'_mm512_unpackhi_epi64' requires target feature 'avx512f', but would be inlined
into function 'ice_rxq_rearm_common' that is compiled without support for
'avx512f'
                        dma_addr4_7 = _mm512_unpackhi_epi64(vaddr4_7,
vaddr4_7);
                                      ^
../drivers/net/ice/ice_rxtx_vec_common.h:478:18: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:481:18: error: always_inline function
'_mm512_add_epi64' requires target feature 'avx512f', but would be inlined into
function 'ice_rxq_rearm_common' that is compiled without support for 'avx512f'
                        dma_addr0_3 = _mm512_add_epi64(dma_addr0_3, hdr_room);
                                      ^
../drivers/net/ice/ice_rxtx_vec_common.h:481:18: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:482:18: error: always_inline function
'_mm512_add_epi64' requires target feature 'avx512f', but would be inlined into
function 'ice_rxq_rearm_common' that is compiled without support for 'avx512f'
                        dma_addr4_7 = _mm512_add_epi64(dma_addr4_7, hdr_room);
                                      ^
../drivers/net/ice/ice_rxtx_vec_common.h:482:18: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:485:4: error: always_inline function
'_mm512_store_si512' requires target feature 'avx512f', but would be inlined
into function 'ice_rxq_rearm_common' that is compiled without support for
'avx512f'
                        _mm512_store_si512((__m512i *)&rxdp->read,
dma_addr0_3);
                        ^
../drivers/net/ice/ice_rxtx_vec_common.h:485:4: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:486:4: error: always_inline function
'_mm512_store_si512' requires target feature 'avx512f', but would be inlined
into function 'ice_rxq_rearm_common' that is compiled without support for
'avx512f'
                        _mm512_store_si512((__m512i *)&(rxdp + 4)->read,
dma_addr4_7);
                        ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
[998/1893] Compiling C object
drivers/libtmp_rte_net_ice.a.p/net_ice_ice_rxtx.c.o
../drivers/net/ice/ice_rxtx.c:129:60: warning: unused parameter 'rxq'
[-Wunused-parameter]
ice_rxd_to_pkt_fields_by_comms_aux_v1(struct ice_rx_queue *rxq,
                                                           ^
../drivers/net/ice/ice_rxtx.c:171:60: warning: unused parameter 'rxq'
[-Wunused-parameter]
ice_rxd_to_pkt_fields_by_comms_aux_v2(struct ice_rx_queue *rxq,
                                                           ^
2 warnings generated.
[1006/1893] Compiling C object
lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
ninja: build stopped: subcommand failed.
[109/890] Compiling C object
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
FAILED: drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 
cc -Idrivers/libtmp_rte_net_i40e.a.p -Idrivers -I../drivers -Idrivers/net/i40e
-I../drivers/net/i40e -Idrivers/net/i40e/base -I../drivers/net/i40e/base
-Ilib/ethdev -I../lib/ethdev -I. -I.. -Iconfig -I../config -Ilib/eal/include
-I../lib/eal/include -Ilib/eal/freebsd/include -I../lib/eal/freebsd/include
-Ilib/eal/x86/include -I../lib/eal/x86/include -Ilib/eal/common
-I../lib/eal/common -Ilib/eal -I../lib/eal -Ilib/kvargs -I../lib/kvargs
-Ilib/metrics -I../lib/metrics -Ilib/telemetry -I../lib/telemetry -Ilib/net
-I../lib/net -Ilib/mbuf -I../lib/mbuf -Ilib/mempool -I../lib/mempool -Ilib/ring
-I../lib/ring -Ilib/meter -I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci
-I../drivers/bus/pci/bsd -Ilib/pci -I../lib/pci -Idrivers/bus/vdev
-I../drivers/bus/vdev -Ilib/hash -I../lib/hash -Ilib/rcu -I../lib/rcu
-fcolor-diagnostics -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -O3 -include
rte_config.h -Wextra -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral
-Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs
-Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes
-Wundef -Wwrite-strings -Wno-address-of-packed-member
-Wno-missing-field-initializers -D_GNU_SOURCE -D__BSD_VISIBLE -fPIC
-march=native -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -DPF_DRIVER
-DVF_DRIVER -DINTEGRATED_VF -DX722_A0_SUPPORT -DCC_AVX2_SUPPORT
-DCC_AVX512_SUPPORT -DRTE_LOG_DEFAULT_LOGTYPE=pmd.net.i40e -MD -MQ
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o -MF
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o.d -o
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o -c
../drivers/net/i40e/i40e_rxtx_vec_avx2.c
In file included from ../drivers/net/i40e/i40e_rxtx_vec_avx2.c:13:
../drivers/net/i40e/i40e_rxtx_vec_common.h:337:22: error: always_inline
function '_mm512_set1_epi64' requires target feature 'avx512f', but would be
inlined into function 'i40e_rxq_rearm_common' that is compiled without support
for 'avx512f'
                __m512i hdr_room = _mm512_set1_epi64(RTE_PKTMBUF_HEADROOM);
                                   ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:337:22: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:385:5: error:
'__builtin_ia32_inserti64x4' needs target feature avx512f
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr0_1),
                                ^
/usr/lib/clang/11.0.1/include/avx512fintrin.h:7413:12: note: expanded from
macro '_mm512_inserti64x4'
  (__m512i)__builtin_ia32_inserti64x4((__v8di)(__m512i)(A), \
           ^
In file included from ../drivers/net/i40e/i40e_rxtx_vec_avx2.c:13:
../drivers/net/i40e/i40e_rxtx_vec_common.h:385:24: error: always_inline
function '_mm512_castsi256_si512' requires target feature 'avx512f', but would
be inlined into function 'i40e_rxq_rearm_common' that is compiled without
support for 'avx512f'
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr0_1),
                                                   ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:385:24: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:388:5: error:
'__builtin_ia32_inserti64x4' needs target feature avx512f
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr4_5),
                                ^
/usr/lib/clang/11.0.1/include/avx512fintrin.h:7413:12: note: expanded from
macro '_mm512_inserti64x4'
  (__m512i)__builtin_ia32_inserti64x4((__v8di)(__m512i)(A), \
           ^
In file included from ../drivers/net/i40e/i40e_rxtx_vec_avx2.c:13:
../drivers/net/i40e/i40e_rxtx_vec_common.h:388:24: error: always_inline
function '_mm512_castsi256_si512' requires target feature 'avx512f', but would
be inlined into function 'i40e_rxq_rearm_common' that is compiled without
support for 'avx512f'
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr4_5),
                                                   ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:388:24: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:392:18: error: always_inline
function '_mm512_unpackhi_epi64' requires target feature 'avx512f', but would
be inlined into function 'i40e_rxq_rearm_common' that is compiled without
support for 'avx512f'
                        dma_addr0_3 = _mm512_unpackhi_epi64(vaddr0_3,
vaddr0_3);
                                      ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:392:18: error: AVX vector argument
of type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:393:18: error: always_inline
function '_mm512_unpackhi_epi64' requires target feature 'avx512f', but would
be inlined into function 'i40e_rxq_rearm_common' that is compiled without
support for 'avx512f'
                        dma_addr4_7 = _mm512_unpackhi_epi64(vaddr4_7,
vaddr4_7);
                                      ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:393:18: error: AVX vector argument
of type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:396:18: error: always_inline
function '_mm512_add_epi64' requires target feature 'avx512f', but would be
inlined into function 'i40e_rxq_rearm_common' that is compiled without support
for 'avx512f'
                        dma_addr0_3 = _mm512_add_epi64(dma_addr0_3, hdr_room);
                                      ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:396:18: error: AVX vector argument
of type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:397:18: error: always_inline
function '_mm512_add_epi64' requires target feature 'avx512f', but would be
inlined into function 'i40e_rxq_rearm_common' that is compiled without support
for 'avx512f'
                        dma_addr4_7 = _mm512_add_epi64(dma_addr4_7, hdr_room);
                                      ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:397:18: error: AVX vector argument
of type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:400:4: error: always_inline function
'_mm512_store_si512' requires target feature 'avx512f', but would be inlined
into function 'i40e_rxq_rearm_common' that is compiled without support for
'avx512f'
                        _mm512_store_si512((__m512i *)&rxdp->read,
dma_addr0_3);
                        ^
../drivers/net/i40e/i40e_rxtx_vec_common.h:400:4: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/i40e/i40e_rxtx_vec_common.h:401:4: error: always_inline function
'_mm512_store_si512' requires target feature 'avx512f', but would be inlined
into function 'i40e_rxq_rearm_common' that is compiled without support for
'avx512f'
                        _mm512_store_si512((__m512i *)&(rxdp + 4)->read,
dma_addr4_7);
                        ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
[119/890] Compiling C object
drivers/libtmp_rte_net_ice.a.p/net_ice_ice_rxtx_vec_avx2.c.o
FAILED: drivers/libtmp_rte_net_ice.a.p/net_ice_ice_rxtx_vec_avx2.c.o 
cc -Idrivers/libtmp_rte_net_ice.a.p -Idrivers -I../drivers -Idrivers/net/ice
-I../drivers/net/ice -Idrivers/net/ice/base -I../drivers/net/ice/base
-Idrivers/common/iavf -I../drivers/common/iavf -Ilib/ethdev -I../lib/ethdev -I.
-I.. -Iconfig -I../config -Ilib/eal/include -I../lib/eal/include
-Ilib/eal/freebsd/include -I../lib/eal/freebsd/include -Ilib/eal/x86/include
-I../lib/eal/x86/include -Ilib/eal/common -I../lib/eal/common -Ilib/eal
-I../lib/eal -Ilib/kvargs -I../lib/kvargs -Ilib/metrics -I../lib/metrics
-Ilib/telemetry -I../lib/telemetry -Ilib/net -I../lib/net -Ilib/mbuf
-I../lib/mbuf -Ilib/mempool -I../lib/mempool -Ilib/ring -I../lib/ring
-Ilib/meter -I../lib/meter -Idrivers/bus/pci -I../drivers/bus/pci
-I../drivers/bus/pci/bsd -Ilib/pci -I../lib/pci -Idrivers/bus/vdev
-I../drivers/bus/vdev -Ilib/hash -I../lib/hash -Ilib/rcu -I../lib/rcu
-fcolor-diagnostics -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -O3 -include
rte_config.h -Wextra -Wcast-qual -Wdeprecated -Wformat -Wformat-nonliteral
-Wformat-security -Wmissing-declarations -Wmissing-prototypes -Wnested-externs
-Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes
-Wundef -Wwrite-strings -Wno-address-of-packed-member
-Wno-missing-field-initializers -D_GNU_SOURCE -D__BSD_VISIBLE -fPIC
-march=native -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -DCC_AVX2_SUPPORT
-DCC_AVX512_SUPPORT -DRTE_LOG_DEFAULT_LOGTYPE=pmd.net.ice -MD -MQ
drivers/libtmp_rte_net_ice.a.p/net_ice_ice_rxtx_vec_avx2.c.o -MF
drivers/libtmp_rte_net_ice.a.p/net_ice_ice_rxtx_vec_avx2.c.o.d -o
drivers/libtmp_rte_net_ice.a.p/net_ice_ice_rxtx_vec_avx2.c.o -c
../drivers/net/ice/ice_rxtx_vec_avx2.c
In file included from ../drivers/net/ice/ice_rxtx_vec_avx2.c:5:
../drivers/net/ice/ice_rxtx_vec_common.h:422:22: error: always_inline function
'_mm512_set1_epi64' requires target feature 'avx512f', but would be inlined
into function 'ice_rxq_rearm_common' that is compiled without support for
'avx512f'
                __m512i hdr_room = _mm512_set1_epi64(RTE_PKTMBUF_HEADROOM);
                                   ^
../drivers/net/ice/ice_rxtx_vec_common.h:422:22: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:470:5: error:
'__builtin_ia32_inserti64x4' needs target feature avx512f
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr0_1),
                                ^
/usr/lib/clang/11.0.1/include/avx512fintrin.h:7413:12: note: expanded from
macro '_mm512_inserti64x4'
  (__m512i)__builtin_ia32_inserti64x4((__v8di)(__m512i)(A), \
           ^
In file included from ../drivers/net/ice/ice_rxtx_vec_avx2.c:5:
../drivers/net/ice/ice_rxtx_vec_common.h:470:24: error: always_inline function
'_mm512_castsi256_si512' requires target feature 'avx512f', but would be
inlined into function 'ice_rxq_rearm_common' that is compiled without support
for 'avx512f'
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr0_1),
                                                   ^
../drivers/net/ice/ice_rxtx_vec_common.h:470:24: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:473:5: error:
'__builtin_ia32_inserti64x4' needs target feature avx512f
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr4_5),
                                ^
/usr/lib/clang/11.0.1/include/avx512fintrin.h:7413:12: note: expanded from
macro '_mm512_inserti64x4'
  (__m512i)__builtin_ia32_inserti64x4((__v8di)(__m512i)(A), \
           ^
In file included from ../drivers/net/ice/ice_rxtx_vec_avx2.c:5:
../drivers/net/ice/ice_rxtx_vec_common.h:473:24: error: always_inline function
'_mm512_castsi256_si512' requires target feature 'avx512f', but would be
inlined into function 'ice_rxq_rearm_common' that is compiled without support
for 'avx512f'
                               
_mm512_inserti64x4(_mm512_castsi256_si512(vaddr4_5),
                                                   ^
../drivers/net/ice/ice_rxtx_vec_common.h:473:24: error: AVX vector return of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:477:18: error: always_inline function
'_mm512_unpackhi_epi64' requires target feature 'avx512f', but would be inlined
into function 'ice_rxq_rearm_common' that is compiled without support for
'avx512f'
                        dma_addr0_3 = _mm512_unpackhi_epi64(vaddr0_3,
vaddr0_3);
                                      ^
../drivers/net/ice/ice_rxtx_vec_common.h:477:18: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:478:18: error: always_inline function
'_mm512_unpackhi_epi64' requires target feature 'avx512f', but would be inlined
into function 'ice_rxq_rearm_common' that is compiled without support for
'avx512f'
                        dma_addr4_7 = _mm512_unpackhi_epi64(vaddr4_7,
vaddr4_7);
                                      ^
../drivers/net/ice/ice_rxtx_vec_common.h:478:18: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:481:18: error: always_inline function
'_mm512_add_epi64' requires target feature 'avx512f', but would be inlined into
function 'ice_rxq_rearm_common' that is compiled without support for 'avx512f'
                        dma_addr0_3 = _mm512_add_epi64(dma_addr0_3, hdr_room);
                                      ^
../drivers/net/ice/ice_rxtx_vec_common.h:481:18: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:482:18: error: always_inline function
'_mm512_add_epi64' requires target feature 'avx512f', but would be inlined into
function 'ice_rxq_rearm_common' that is compiled without support for 'avx512f'
                        dma_addr4_7 = _mm512_add_epi64(dma_addr4_7, hdr_room);
                                      ^
../drivers/net/ice/ice_rxtx_vec_common.h:482:18: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:485:4: error: always_inline function
'_mm512_store_si512' requires target feature 'avx512f', but would be inlined
into function 'ice_rxq_rearm_common' that is compiled without support for
'avx512f'
                        _mm512_store_si512((__m512i *)&rxdp->read,
dma_addr0_3);
                        ^
../drivers/net/ice/ice_rxtx_vec_common.h:485:4: error: AVX vector argument of
type '__m512i' (vector of 8 'long long' values) without 'avx512f' enabled
changes the ABI
../drivers/net/ice/ice_rxtx_vec_common.h:486:4: error: always_inline function
'_mm512_store_si512' requires target feature 'avx512f', but would be inlined
into function 'ice_rxq_rearm_common' that is compiled without support for
'avx512f'
                        _mm512_store_si512((__m512i *)&(rxdp + 4)->read,
dma_addr4_7);
                        ^
fatal error: too many errors emitted, stopping now [-ferror-limit=]
20 errors generated.
ninja: build stopped: subcommand failed.

-- 
You are receiving this mail because:
You are the assignee for the bug.

^ permalink raw reply	[relevance 5%]

* [dpdk-dev] [PATCH v2 0/1] relative path support for ABI compatibility check
    @ 2021-08-11  6:17  8% ` Feifei Wang
  2021-08-11  6:17 17%   ` [dpdk-dev] [PATCH v2 1/1] devtools: add " Feifei Wang
  1 sibling, 1 reply; 200+ results
From: Feifei Wang @ 2021-08-11  6:17 UTC (permalink / raw)
  Cc: dev, nd, Feifei Wang

Add relative path support for ABI compatibility check.

v2: 
1. delete the code simplification patch due to negative effects (Thomas)

Phil Yang (1):
  devtools: add relative path support for ABI compatibility check

 devtools/test-meson-builds.sh | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

-- 
2.25.1


^ permalink raw reply	[relevance 8%]

* [dpdk-dev] [PATCH v2 1/1] devtools: add relative path support for ABI compatibility check
  2021-08-11  6:17  8% ` [dpdk-dev] [PATCH v2 0/1] relative path support for ABI compatibility check Feifei Wang
@ 2021-08-11  6:17 17%   ` Feifei Wang
  0 siblings, 0 replies; 200+ results
From: Feifei Wang @ 2021-08-11  6:17 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: dev, nd, Phil Yang, Feifei Wang, Juraj Linkeš, Ruifeng Wang

From: Phil Yang <phil.yang@arm.com>

Because the DPDK guide does not require an absolute path for the ABI
compatibility check, users may set 'DPDK_ABI_REF_DIR' to a relative
path:

~/dpdk/devtools$ DPDK_ABI_REF_VERSION=v19.11 DPDK_ABI_REF_DIR=build-gcc-shared
./test-meson-builds.sh

And if the DESTDIR is not an absolute path, ninja complains:
+ install_target build-gcc-shared/v19.11/build build-gcc-shared/v19.11/build-gcc-shared
+ rm -rf build-gcc-shared/v19.11/build-gcc-shared
+ echo 'DESTDIR=build-gcc-shared/v19.11/build-gcc-shared ninja -C build-gcc-shared/v19.11/build install'
+ DESTDIR=build-gcc-shared/v19.11/build-gcc-shared
+ ninja -C build-gcc-shared/v19.11/build install
...
ValueError: dst_dir must be absolute, got build-gcc-shared/v19.11/build-gcc-shared/usr/local/share/dpdk/
examples/bbdev_app
...
Error: install directory 'build-gcc-shared/v19.11/build-gcc-shared' does not exist.

To fix this, add relative path support using 'readlink -f'.

Signed-off-by: Phil Yang <phil.yang@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Juraj Linkeš <juraj.linkes@pantheon.tech>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
 devtools/test-meson-builds.sh | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 9ec8e2bc7e..8ddde95276 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -168,7 +168,8 @@ build () # <directory> <target cc | cross file> <ABI check> [meson options]
 	config $srcdir $builds_dir/$targetdir $cross --werror $*
 	compile $builds_dir/$targetdir
 	if [ -n "$DPDK_ABI_REF_VERSION" -a "$abicheck" = ABI ] ; then
-		abirefdir=${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION
+		abirefdir=$(readlink -f \
+			${DPDK_ABI_REF_DIR:-reference}/$DPDK_ABI_REF_VERSION)
 		if [ ! -d $abirefdir/$targetdir ]; then
 			# clone current sources
 			if [ ! -d $abirefdir/src ]; then
-- 
2.25.1


^ permalink raw reply	[relevance 17%]

* Re: [dpdk-dev] [RFC] ethdev: change queue release callback
  2021-08-10  9:07  0%           ` Xueming(Steven) Li
@ 2021-08-11 11:57  0%             ` Ferruh Yigit
  2021-08-11 12:13  0%               ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-08-11 11:57 UTC (permalink / raw)
  To: Xueming(Steven) Li, Singh, Aman Deep, Andrew Rybchenko
  Cc: dev, Slava Ovsiienko, NBU-Contact-Thomas Monjalon, jerinj

On 8/10/2021 10:07 AM, Xueming(Steven) Li wrote:
> 
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Tuesday, August 10, 2021 4:54 PM
>> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew Rybchenko
>> <andrew.rybchenko@oktetlabs.ru>
>> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
>> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
>>
>> On 8/10/2021 9:03 AM, Xueming(Steven) Li wrote:
>>> Hi Singh and Ferruh,
>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>> Sent: Monday, August 9, 2021 11:31 PM
>>>> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew Rybchenko
>>>> <andrew.rybchenko@oktetlabs.ru>; Xueming(Steven) Li
>>>> <xuemingl@nvidia.com>
>>>> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>;
>>>> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
>>>> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
>>>>
>>>> On 8/9/2021 3:39 PM, Singh, Aman Deep wrote:
>>>>> Hi Xueming,
>>>>>
>>>>> On 7/28/2021 1:10 PM, Andrew Rybchenko wrote:
>>>>>> On 7/27/21 6:41 AM, Xueming Li wrote:
>>>>>>> To align with other eth device queue configuration callbacks,
>>>>>>> change RX and TX queue release callback API parameter from queue
>>>>>>> object to device and queue index.
>>>>>>>
>>>>>>> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
>>>>>>
>>>>>> In fact, there is no strong reasons to do it, but I think it is a
>>>>>> nice cleanup to use (dev + queue index) on control path.
>>>>>>
>>>>>> Hopefully it will not result in any regressions.
>>>>>
>>>>> Combined there are 100+ API's for Rx/Tx queue_release that need to
>>>>> be modified for it.
>>>>>
>>>>> I believe all regression possibilities here will be caught, in
>>>>> compilation phase itself.
>>>>>
>>>>
>>>> Same here, it is a good cleanup but there is no strong reason for it.
>>>>
>>>> Since it is all internal, there is no ABI restriction on the patch,
>>>> and v21.11 will be full ABI break patches, to not cause conflicts with this change, what would you think to have it on v22.02?
>>>
>>> This patch is required by shared-rxq feature which ABI broken, target to 21.11.
>>
>> Why it is required?
> 
> In rx burst function, rxq object is used in data path. For best data performance, it's shared-rxq object in case of shared rxq enabled.
> I think eth api defined rxq object for performance as well, specific on data plane. 
> Hardware saves port info received packet descriptor for my case.
> Can't tell which device's queue with this shared rxq object, control path can't use this shared rxq anymore, have to be specific on dev and queue id.
> 

I have seen the shared Rx queue patch, but it only introduces the offload and
doesn't have the PMD implementation, so it is hard to see the dependency. Can
you please post the pseudocode for the PMDs for shared-rxq?
How will a queue know whether it is shared or not during release?

Btw, the shared Rx patch doesn't mention this dependency.

>>
>>> I'll do it carefully, fortunately, the change is straightforward.
>>>
> 


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC] ethdev: change queue release callback
  2021-08-11 11:57  0%             ` Ferruh Yigit
@ 2021-08-11 12:13  0%               ` Xueming(Steven) Li
  2021-08-12 14:29  0%                 ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Xueming(Steven) Li @ 2021-08-11 12:13 UTC (permalink / raw)
  To: Ferruh Yigit, Singh, Aman Deep, Andrew Rybchenko
  Cc: dev, Slava Ovsiienko, NBU-Contact-Thomas Monjalon, jerinj



> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Wednesday, August 11, 2021 7:58 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>
> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> jerinj@marvell.com
> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
> 
> On 8/10/2021 10:07 AM, Xueming(Steven) Li wrote:
> >
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >> Sent: Tuesday, August 10, 2021 4:54 PM
> >> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Singh, Aman Deep
> >> <aman.deep.singh@intel.com>; Andrew Rybchenko
> >> <andrew.rybchenko@oktetlabs.ru>
> >> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>;
> >> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> >> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
> >>
> >> On 8/10/2021 9:03 AM, Xueming(Steven) Li wrote:
> >>> Hi Singh and Ferruh,
> >>>
> >>>> -----Original Message-----
> >>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >>>> Sent: Monday, August 9, 2021 11:31 PM
> >>>> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew Rybchenko
> >>>> <andrew.rybchenko@oktetlabs.ru>; Xueming(Steven) Li
> >>>> <xuemingl@nvidia.com>
> >>>> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>;
> >>>> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> >>>> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
> >>>>
> >>>> On 8/9/2021 3:39 PM, Singh, Aman Deep wrote:
> >>>>> Hi Xueming,
> >>>>>
> >>>>> On 7/28/2021 1:10 PM, Andrew Rybchenko wrote:
> >>>>>> On 7/27/21 6:41 AM, Xueming Li wrote:
> >>>>>>> To align with other eth device queue configuration callbacks,
> >>>>>>> change RX and TX queue release callback API parameter from queue
> >>>>>>> object to device and queue index.
> >>>>>>>
> >>>>>>> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> >>>>>>
> >>>>>> In fact, there is no strong reasons to do it, but I think it is a
> >>>>>> nice cleanup to use (dev + queue index) on control path.
> >>>>>>
> >>>>>> Hopefully it will not result in any regressions.
> >>>>>
> >>>>> Combined there are 100+ API's for Rx/Tx queue_release that need to
> >>>>> be modified for it.
> >>>>>
> >>>>> I believe all regression possibilities here will be caught, in
> >>>>> compilation phase itself.
> >>>>>
> >>>>
> >>>> Same here, it is a good cleanup but there is no strong reason for it.
> >>>>
> >>>> Since it is all internal, there is no ABI restriction on the patch,
> >>>> and v21.11 will be full ABI break patches, to not cause conflicts with this change, what would you think to have it on v22.02?
> >>>
> >>> This patch is required by shared-rxq feature which ABI broken, target to 21.11.
> >>
> >> Why it is required?
> >
> > In rx burst function, rxq object is used in data path. For best data performance, it's shared-rxq object in case of shared rxq enabled.
> > I think eth api defined rxq object for performance as well, specific on data plane.
> > Hardware saves port info received packet descriptor for my case.
> > Can't tell which device's queue with this shared rxq object, control path can't use this shared rxq anymore, have to be specific on
> dev and queue id.
> >
> 
> I have seen shared Rx queue patch, but that just introduces the offload and doesn't have the PMD implementation, so hard to see the
> dependency, can you please put the pseudocode for PMDs for shared-rxq?

The code is almost ready; I'll upload the PMD part soon.
But first, I'll upload the v1 patch for this RFC and make the PMD patches depend on that v1 patch.

> How a queue will know if it is shared or not, during release?

That's why this RFC wants to change the callback parameters to device and queue id.
There is an offload flag during rxq setup, either in the device or in the queue configuration.
The PMD driver saves the flag and operates accordingly.
The ethdev API doesn't need to save this, unless there is a solid reason.
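
To illustrate the direction (a sketch based on the RFC description, not the
final patch), the internal release callback moves from taking only the queue
object to taking the device and queue index:

	/* before */
	typedef void (*eth_queue_release_t)(void *queue);

	/* after (exact names may differ in the actual patch) */
	typedef void (*eth_queue_release_t)(struct rte_eth_dev *dev,
					    uint16_t queue_id);

With the latter form, a PMD can reach its per-port private data even when
dev->data->rx_queues[queue_id] points to a shared object.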

> 
> Btw, shared Rx doesn't mention from this dependency in the patch.

Agree, indeed a strong dependency, thanks!

> 
> >>
> >>> I'll do it carefully, fortunately, the change is straightforward.
> >>>
> >


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC] ethdev: change queue release callback
  2021-08-11 12:13  0%               ` Xueming(Steven) Li
@ 2021-08-12 14:29  0%                 ` Xueming(Steven) Li
  0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-08-12 14:29 UTC (permalink / raw)
  To: Xueming(Steven) Li, Ferruh Yigit, Singh, Aman Deep, Andrew Rybchenko
  Cc: dev, Slava Ovsiienko, NBU-Contact-Thomas Monjalon, jerinj



> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Xueming(Steven) Li
> Sent: Wednesday, August 11, 2021 8:13 PM
> To: Ferruh Yigit <ferruh.yigit@intel.com>; Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew Rybchenko
> <andrew.rybchenko@oktetlabs.ru>
> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> jerinj@marvell.com
> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
> 
> 
> 
> > -----Original Message-----
> > From: Ferruh Yigit <ferruh.yigit@intel.com>
> > Sent: Wednesday, August 11, 2021 7:58 PM
> > To: Xueming(Steven) Li <xuemingl@nvidia.com>; Singh, Aman Deep
> > <aman.deep.singh@intel.com>; Andrew Rybchenko
> > <andrew.rybchenko@oktetlabs.ru>
> > Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>;
> > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; jerinj@marvell.com
> > Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
> >
> > On 8/10/2021 10:07 AM, Xueming(Steven) Li wrote:
> > >
> > >
> > >> -----Original Message-----
> > >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> > >> Sent: Tuesday, August 10, 2021 4:54 PM
> > >> To: Xueming(Steven) Li <xuemingl@nvidia.com>; Singh, Aman Deep
> > >> <aman.deep.singh@intel.com>; Andrew Rybchenko
> > >> <andrew.rybchenko@oktetlabs.ru>
> > >> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>;
> > >> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> > >> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release callback
> > >>
> > >> On 8/10/2021 9:03 AM, Xueming(Steven) Li wrote:
> > >>> Hi Singh and Ferruh,
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
> > >>>> Sent: Monday, August 9, 2021 11:31 PM
> > >>>> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Andrew
> > >>>> Rybchenko <andrew.rybchenko@oktetlabs.ru>; Xueming(Steven) Li
> > >>>> <xuemingl@nvidia.com>
> > >>>> Cc: dev@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>;
> > >>>> NBU-Contact-Thomas Monjalon <thomas@monjalon.net>
> > >>>> Subject: Re: [dpdk-dev] [RFC] ethdev: change queue release
> > >>>> callback
> > >>>>
> > >>>> On 8/9/2021 3:39 PM, Singh, Aman Deep wrote:
> > >>>>> Hi Xueming,
> > >>>>>
> > >>>>> On 7/28/2021 1:10 PM, Andrew Rybchenko wrote:
> > >>>>>> On 7/27/21 6:41 AM, Xueming Li wrote:
> > >>>>>>> To align with other eth device queue configuration callbacks,
> > >>>>>>> change RX and TX queue release callback API parameter from
> > >>>>>>> queue object to device and queue index.
> > >>>>>>>
> > >>>>>>> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > >>>>>>
> > >>>>>> In fact, there is no strong reasons to do it, but I think it is
> > >>>>>> a nice cleanup to use (dev + queue index) on control path.
> > >>>>>>
> > >>>>>> Hopefully it will not result in any regressions.
> > >>>>>
> > >>>>> Combined there are 100+ API's for Rx/Tx queue_release that need
> > >>>>> to be modified for it.
> > >>>>>
> > >>>>> I believe all regression possibilities here will be caught, in
> > >>>>> compilation phase itself.
> > >>>>>
> > >>>>
> > >>>> Same here, it is a good cleanup but there is no strong reason for it.
> > >>>>
> > >>>> Since it is all internal, there is no ABI restriction on the
> > >>>> patch, and v21.11 will be full ABI break patches, to not cause conflicts with this change, what would you think to have it on
> v22.02?
> > >>>
> > >>> This patch is required by shared-rxq feature which ABI broken, target to 21.11.
> > >>
> > >> Why it is required?
> > >
> > > In rx burst function, rxq object is used in data path. For best data performance, it's shared-rxq object in case of shared rxq enabled.
> > > I think eth api defined rxq object for performance as well, specific on data plane.
> > > Hardware saves port info received packet descriptor for my case.
> > > Can't tell which device's queue with this shared rxq object, control
> > > path can't use this shared rxq anymore, have to be specific on
> > dev and queue id.
> > >
> >
> > I have seen shared Rx queue patch, but that just introduces the
> > offload and doesn't have the PMD implementation, so hard to see the dependency, can you please put the pseudocode for PMDs
> for shared-rxq?
> 
> The code is almost ready, I'll upload the PMD part soon.

There are lots of PMD conflicts to rebase, so I have to hold it due to other urgent tasks. Here is the overall data structure:

	struct mlx5_rxq_ctrl {
		bool shared;
		LIST_HEAD(, mlx5_rxq_priv) owners; /* owner rxq(s) */
		/* datapath resources */
	};

	struct mlx5_rxq_priv { /* rx queue */
		uint16_t queue_index;
		LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* membership in shared rxq */
		struct mlx5_rxq_ctrl *ctrl; /* saved to dev->data->rx_queues[] */
		/* other per-queue resources */
	};

rxq_ctrl maps 1:1 to rxq_priv for a standard rxq, and 1:N for a shared one.
A shared rxq_ctrl is not released until its last owner rxq_priv is released.
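
A rough sketch of the release path this layout implies (helper names are
placeholders, not the final mlx5 code):

	static void
	mlx5_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
	{
		struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, qid);

		if (rxq == NULL)
			return;
		LIST_REMOVE(rxq, owner_entry);
		if (LIST_EMPTY(&rxq->ctrl->owners))
			mlx5_rxq_ctrl_release(rxq->ctrl); /* last owner frees shared resources */
		mlx5_free(rxq);
	}

This is also why the release callback needs (dev, queue_id): the shared
rxq_ctrl alone cannot tell which port's queue is being released.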

BTW, v1 posted, please check.

> But firstly, I'll upload v1 patch for this RFC, the make PMD patches depends on this v1 patch.
> 
> > How a queue will know if it is shared or not, during release?
> 
> That's why this RFC want to change callback parameter to device and queue id.
> There is an offloading flag during rxq setup, either in device or in queue configuration.
> PMD driver saves the flag and operate accordingly.
> Ethdev api doesn't need to save this, unless a solid reason.
> 
> >
> > Btw, shared Rx doesn't mention from this dependency in the patch.
> 
> Agree, indeed a strong dependency, thanks!
> 
> >
> > >>
> > >>> I'll do it carefully, fortunately, the change is straightforward.
> > >>>
> > >


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] version: 21.11-rc0
  2021-08-08 19:26 11% [dpdk-dev] [PATCH] version: 21.11-rc0 Thomas Monjalon
@ 2021-08-12 14:36  0% ` Ferruh Yigit
  2021-08-12 18:57  0%   ` [dpdk-dev] [EXT] " Akhil Goyal
  2021-08-17  6:34  4% ` [dpdk-dev] " David Marchand
  1 sibling, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-08-12 14:36 UTC (permalink / raw)
  To: Thomas Monjalon, dev; +Cc: david.marchand, mdr, Akhil Goyal

On 8/8/2021 8:26 PM, Thomas Monjalon wrote:
> Start a new release cycle with empty release notes.
> 
> The ABI version becomes 22.0.
> The map files are updated to the new ABI major number (22).
> The ABI exceptions are dropped
> and CI ABI checks are disabled
> because compatibility is not preserved.
> 
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>

Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>


(Applied to dpdk-next-net/main until patch merged to main repo.)


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [EXT] Re:  [PATCH] version: 21.11-rc0
  2021-08-12 14:36  0% ` Ferruh Yigit
@ 2021-08-12 18:57  0%   ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-08-12 18:57 UTC (permalink / raw)
  To: Ferruh Yigit, Thomas Monjalon, dev; +Cc: david.marchand, mdr

> On 8/8/2021 8:26 PM, Thomas Monjalon wrote:
> > Start a new release cycle with empty release notes.
> >
> > The ABI version becomes 22.0.
> > The map files are updated to the new ABI major number (22).
> > The ABI exceptions are dropped
> > and CI ABI checks are disabled
> > because compatibility is not preserved.
> >
> > Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> 
> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
> 
> 
> (Applied to dpdk-next-net/main until patch merged to main repo.)
Applied to dpdk-next-crypto as well.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCHv3] include: fix sys/queue.h.
  @ 2021-08-12 21:58  3%   ` Dmitry Kozlyuk
  2021-08-13  1:02  1%   ` [dpdk-dev] [PATCHv4] eal: remove sys/queue.h from public headers William Tu
  1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-08-12 21:58 UTC (permalink / raw)
  To: William Tu; +Cc: dev, nick.connolly

2021-08-12 20:05 (UTC+0000), William Tu:
> Currently there are a couple of public header files include

Suggested subject: "eal: remove sys/queue.h from public headers".

1. The state before the patch should be described in the past tense.
2. Really ten times more than "a couple", suggesting "some" (nit).
3. "files _that_ include"?

> 'sys/queue.h', which is a POSIX functionality.

It's not POSIX, it's found on many Unix systems.

> When compiling DPDK with OVS on Windows, we encountered issues such as, found the missing
> header.

This sentence is a little hard to parse. Instead, suggesting:

	This file is missing on Windows. During the build, DPDK uses a
	bundled copy, but it cannot be installed because macros it exports
	may conflict with the ones from application code or environment.

> In file included from ../lib/dpdk.c:27:
> C:\temp\dpdk\include\rte_log.h:24:10: fatal error: 'sys/queue.h' file
> not found

An explanation is missing why <sys/queue.h> embedded in DPDK shouldn't be
installed (see above, maybe you can come up with something better).

> 
> The patch fixes it by removing the #include <sys/queue.h> from
> DPDK public headers, so programs including DPDK headers don't depend
> on POSIX sys/queue.h. For Linux/FreeBSD, DPDK public headers only need a
> handful of macros for list/tailq heads and links. Those macros should be
> provided by DPDK, with RTE_ prefix.

It is worth noting that RTE_ macros must be compatible with <sys/queue.h>
at the level of API (to use with <sys/queue.h> macros in C files) and ABI
(to avoid breaking it).

Nit: "Should" is not the right word for things done in the patch. Same below.

> For Linux and FreeBSD it will just be:
>     #include <sys/queue.h>
>     #define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
>     /* ... */
> For Windows, we copy these definitions from <sys/queue.h> to rte_os.h.

No need to describe what's inside the patch, diff already does it :)

> With this patch, all the public headers should not have
> "#include <sys/queue.h>" or "TAILQ_xxx" macros.
> 
> Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
> Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
> Signed-off-by: William Tu <u9012063@gmail.com>
> ---
> v2->v3:
>   * follow the suggestion by Dmitry
>   * run checkpatches, there are some errors but I think either
>     the original file has over 80-char line due to comments,
>     or some false positive about macro.
> v1->v2:
>   - follow the suggestion by Nick and Dmitry
>   - http://mails.dpdk.org/archives/dev/2021-August/216304.html
> 
> Signed-off-by: William Tu <u9012063@gmail.com>
> ---
[...]
> diff --git a/lib/eal/freebsd/include/rte_os.h b/lib/eal/freebsd/include/rte_os.h
> index 627f0483ab..dc889e5826 100644
> --- a/lib/eal/freebsd/include/rte_os.h
> +++ b/lib/eal/freebsd/include/rte_os.h
> @@ -11,6 +11,39 @@
>   */
>  
>  #include <pthread_np.h>
> +#include <sys/queue.h>
> +
> +/* These macros are compatible with system's sys/queue.h. */
> +#define RTE_TAILQ_INIT(head) TAILQ_INIT(head)
> +#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
> +#define RTE_TAILQ_LAST(head, headname) TAILQ_LAST(head, headname)
> +#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
> +#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
> +#define RTE_TAILQ_EMPTY(head) TAILQ_EMPTY(head)
> +#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
> +#define RTE_TAILQ_HEAD_INITIALIZER(head) TAILQ_HEAD_INITIALIZER(head)
> +#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
> +#define RTE_TAILQ_INSERT_TAIL(head, elm, field) \
> +	TAILQ_INSERT_TAIL(head, elm, field)
> +#define RTE_TAILQ_REMOVE(head, elm, field) TAILQ_REMOVE(head, elm, field)
> +#define RTE_TAILQ_INSERT_BEFORE(listelm, elm, field) \
> +	TAILQ_INSERT_BEFORE(listelm, elm, field)
> +#define RTE_TAILQ_INSERT_AFTER(head, listelm, elm, field) \
> +	TAILQ_INSERT_AFTER(head, listelm, elm, field)
> +#define RTE_TAILQ_INSERT_HEAD(head, elm, field) \
> +	TAILQ_INSERT_HEAD(head, elm, field)
> +
> +#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
> +#define RTE_STAILQ_HEAD_INITIALIZER(head) STAILQ_HEAD_INITIALIZER(head)
> +#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)

Most of these macros are not used in public headers and are not needed.
The idea is that TAILQ_* macros from sys/queue.h can be used in C files
with variables declared with RTE_TAILQ_HEAD/ENTRY in public headers.
Needed macros:
	RTE_TAILQ_HEAD
	RTE_TAILQ_ENTRY
	RTE_TAILQ_FOREACH
	RTE_TAILQ_FIRST (for RTE_TAILQ_FOREACH_SAFE only)
	RTE_TAILQ_NEXT (ditto)
	RTE_STAILQ_HEAD
	RTE_STAILQ_ENTRY

> +
> +/* This is not defined in sys/queue.h */
> +#ifndef TAILQ_FOREACH_SAFE
> +#define TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
> +	for ((var) = RTE_TAILQ_FIRST((head));			\
> +	    (var) && ((tvar) = RTE_TAILQ_NEXT((var), field), 1);	\
> +	    (var) = (tvar))
> +#endif

Please simply change the three usages of TAILQ_FOREACH_SAFE to
RTE_TAILQ_FOREACH_SAFE and remove this one. It cannot be placed in rte_os.h,
because rte_os.h is public and it must not export non-RTE symbols.

All comments to this file obviously apply to Linux version as well.

>  
>  typedef cpuset_t rte_cpuset_t;
>  #define RTE_HAS_CPUSET
[...]
> diff --git a/lib/eal/windows/include/rte_os.h b/lib/eal/windows/include/rte_os.h
> index 66c711d458..d0935c5003 100644
> --- a/lib/eal/windows/include/rte_os.h
> +++ b/lib/eal/windows/include/rte_os.h
> @@ -18,6 +18,144 @@
>  extern "C" {
>  #endif
>  
> +#ifdef QUEUE_MACRO_DEBUG_TRACE

IMO all these debugging macros should be removed from this header,
including their use in user-facing macros.
They are an implementation detail for <sys/queue.h> developers.

> +/* Store the last 2 places the queue element or head was altered */
> +struct qm_trace {
> +	unsigned long	 lastline;
> +	unsigned long	 prevline;
> +	const char	*lastfile;
> +	const char	*prevfile;
> +};
> +
> +/**
> + * These macros are compatible with the sys/queue.h provided
> + * at DPDK source code.
> + */
[...]
> +
> +#define	QMD_TAILQ_CHECK_HEAD(head, field)
> +#define	QMD_TAILQ_CHECK_TAIL(head, headname)
> +#define	QMD_TAILQ_CHECK_NEXT(elm, field)
> +#define	QMD_TAILQ_CHECK_PREV(elm, field)

Redundant empty lines below.

> +
> +
> +#define	RTE_TAILQ_EMPTY(head)	((head)->tqh_first == NULL)
> +
> +#define	RTE_TAILQ_FIRST(head)	((head)->tqh_first)
> +
> +#define	RTE_TAILQ_INIT(head) do {					\

I suggest removing all spaces but one before the backslash
so that you don't need to manually align.
At least please keep the lines within 80 characters.

> +	RTE_TAILQ_FIRST((head)) = NULL;					\
> +	(head)->tqh_last = &RTE_TAILQ_FIRST((head));			\
> +	QMD_TRACE_HEAD(head);						\
> +} while (0)
[...]

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCHv4] eal: remove sys/queue.h from public headers.
    2021-08-12 21:58  3%   ` Dmitry Kozlyuk
@ 2021-08-13  1:02  1%   ` William Tu
  2021-08-13  1:11  0%     ` Stephen Hemminger
  2021-08-13  3:36  1%     ` [dpdk-dev] [PATCHv5] " William Tu
  1 sibling, 2 replies; 200+ results
From: William Tu @ 2021-08-13  1:02 UTC (permalink / raw)
  To: dev; +Cc: Dmitry.Kozliuk, nick.connolly

Currently there are some public headers that include 'sys/queue.h', which
is not POSIX, but is usually provided by the Linux/BSD system library.
(Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
The file is missing on Windows. During the Windows build, DPDK uses a
bundled copy, so building the DPDK library works fine. But when OVS or
other applications use DPDK as a library on Windows, the build fails
because some DPDK public headers include 'sys/queue.h' and no such file
exists there.

One solution is to install 'lib/eal/windows/include/sys/queue.h' into the
Windows environment, such as [1]. However, this means DPDK exports the
functionality of 'sys/queue.h' into the environment, which might cause
its symbols, macros, and headers to clash with other applications.

The patch fixes it by removing "#include <sys/queue.h>" from the
DPDK public headers, so programs including DPDK headers don't depend
on the system to provide 'sys/queue.h'. When these public headers use
macros such as TAILQ_xxx, we replace them with RTE_-prefixed ones.
For Windows, we copy the needed definitions from <sys/queue.h> into the
Windows rte_os.h. Note that these RTE_ macros stay compatible with
<sys/queue.h>, both at the level of API (so they can be mixed with
<sys/queue.h> macros in C files) and ABI (to avoid breaking it).
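
For instance (a sketch, not the full definitions), on Linux/FreeBSD the RTE_
macro simply aliases the system one, while on Windows it expands to the same
memory layout:

	/* Linux/FreeBSD rte_os.h */
	#include <sys/queue.h>
	#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)

	/* Windows rte_os.h */
	#define RTE_TAILQ_ENTRY(type) \
	struct { \
		struct type *tqe_next;  /* next element */ \
		struct type **tqe_prev; /* address of previous next element */ \
	}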

Additionally, TAILQ_FOREACH_SAFE is not part of <sys/queue.h>, so
the patch replaces it with RTE_TAILQ_FOREACH_SAFE.
With this patch, the public headers no longer contain
"#include <sys/queue.h>" or "TAILQ_xxx" macros.

[1] http://mails.dpdk.org/archives/dev/2021-August/216304.html

Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
Signed-off-by: William Tu <u9012063@gmail.com>
---
v3-v4:
* address comments from Dmitry
---
 drivers/bus/auxiliary/private.h            |  1 +
 drivers/bus/auxiliary/rte_bus_auxiliary.h  |  5 ++--
 drivers/bus/dpaa/dpaa_bus.c                |  4 +--
 drivers/bus/fslmc/fslmc_bus.c              |  4 +--
 drivers/bus/fslmc/fslmc_vfio.c             |  9 ++++---
 drivers/bus/ifpga/rte_bus_ifpga.h          |  8 +++---
 drivers/bus/pci/pci_params.c               |  2 ++
 drivers/bus/pci/rte_bus_pci.h              | 13 +++++----
 drivers/bus/pci/windows/pci.c              |  3 +++
 drivers/bus/pci/windows/pci_netuio.c       |  2 ++
 drivers/bus/vdev/rte_bus_vdev.h            |  7 +++--
 drivers/bus/vdev/vdev.c                    |  3 ++-
 drivers/bus/vmbus/rte_bus_vmbus.h          | 13 +++++----
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c         |  2 +-
 drivers/net/bonding/rte_eth_bond_flow.c    |  2 +-
 drivers/net/failsafe/failsafe_flow.c       |  2 +-
 drivers/net/i40e/i40e_ethdev.c             |  9 ++++---
 drivers/net/i40e/i40e_ethdev.h             |  1 +
 drivers/net/i40e/i40e_flow.c               |  6 ++---
 drivers/net/i40e/i40e_hash.c               |  2 +-
 drivers/net/i40e/rte_pmd_i40e.c            |  6 ++---
 drivers/net/iavf/iavf_generic_flow.c       | 14 +++++-----
 drivers/net/ice/ice_dcf_ethdev.c           |  1 +
 drivers/net/ice/ice_ethdev.c               |  4 +--
 drivers/net/ice/ice_generic_flow.c         | 14 +++++-----
 drivers/net/softnic/rte_eth_softnic_flow.c |  3 ++-
 drivers/net/softnic/rte_eth_softnic_swq.c  |  2 +-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c        |  2 +-
 lib/bbdev/rte_bbdev.h                      |  2 +-
 lib/cryptodev/rte_cryptodev.h              |  2 +-
 lib/cryptodev/rte_cryptodev_pmd.h          |  2 +-
 lib/eal/common/eal_common_devargs.c        |  6 +++--
 lib/eal/common/eal_common_fbarray.c        |  1 +
 lib/eal/common/eal_common_log.c            |  1 +
 lib/eal/common/eal_common_memalloc.c       |  1 +
 lib/eal/common/eal_common_options.c        |  3 ++-
 lib/eal/common/eal_trace.h                 |  2 ++
 lib/eal/freebsd/include/rte_os.h           | 15 +++++++++++
 lib/eal/include/rte_bus.h                  |  5 ++--
 lib/eal/include/rte_class.h                |  6 ++---
 lib/eal/include/rte_dev.h                  |  5 ++--
 lib/eal/include/rte_devargs.h              |  3 +--
 lib/eal/include/rte_log.h                  |  1 -
 lib/eal/include/rte_service.h              |  1 -
 lib/eal/include/rte_tailq.h                | 15 +++++------
 lib/eal/linux/include/rte_os.h             | 15 +++++++++++
 lib/eal/windows/eal_alarm.c                |  1 +
 lib/eal/windows/include/rte_os.h           | 31 ++++++++++++++++++++++
 lib/efd/rte_efd.c                          |  2 +-
 lib/ethdev/rte_ethdev_core.h               |  2 +-
 lib/hash/rte_fbk_hash.h                    |  1 -
 lib/hash/rte_thash.c                       |  2 ++
 lib/ip_frag/rte_ip_frag.h                  |  4 +--
 lib/mempool/rte_mempool.c                  |  2 +-
 lib/mempool/rte_mempool.h                  |  9 +++----
 lib/pci/rte_pci.h                          |  1 -
 lib/ring/rte_ring_core.h                   |  1 -
 lib/table/rte_swx_table.h                  |  7 ++---
 lib/table/rte_swx_table_selector.h         |  5 ++--
 lib/vhost/iotlb.c                          | 11 ++++----
 lib/vhost/rte_vdpa_dev.h                   |  2 +-
 lib/vhost/vdpa.c                           |  2 +-
 62 files changed, 193 insertions(+), 120 deletions(-)

diff --git a/drivers/bus/auxiliary/private.h b/drivers/bus/auxiliary/private.h
index 9987e8b501..d22e83cf7a 100644
--- a/drivers/bus/auxiliary/private.h
+++ b/drivers/bus/auxiliary/private.h
@@ -7,6 +7,7 @@
 
 #include <stdbool.h>
 #include <stdio.h>
+#include <sys/queue.h>
 
 #include "rte_bus_auxiliary.h"
 
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index 2462bad2ba..b1f5610404 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -113,7 +112,7 @@ typedef int (rte_auxiliary_dma_unmap_t)(struct rte_auxiliary_device *dev,
  * A structure describing an auxiliary device.
  */
 struct rte_auxiliary_device {
-	TAILQ_ENTRY(rte_auxiliary_device) next;   /**< Next probed device. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
 	struct rte_device device;                 /**< Inherit core device */
 	char name[RTE_DEV_NAME_MAX_LEN + 1];      /**< ASCII device name */
 	struct rte_intr_handle intr_handle;       /**< Interrupt handle */
@@ -124,7 +123,7 @@ struct rte_auxiliary_device {
  * A structure describing an auxiliary driver.
  */
 struct rte_auxiliary_driver {
-	TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
 	struct rte_driver driver;             /**< Inherit core driver. */
 	struct rte_auxiliary_bus *bus;        /**< Auxiliary bus reference. */
 	rte_auxiliary_match_t *match;         /**< Device match function. */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index e499305d85..6cab2ae760 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -105,7 +105,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		comp = compare_dpaa_devices(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
@@ -245,7 +245,7 @@ dpaa_clean_device_list(void)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index becc455f6b..8c8f8a298d 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -45,7 +45,7 @@ cleanup_fslmc_device_list(void)
 	struct rte_dpaa2_device *dev;
 	struct rte_dpaa2_device *t_dev;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
 		TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
@@ -82,7 +82,7 @@ insert_in_device_list(struct rte_dpaa2_device *newdev)
 	struct rte_dpaa2_device *dev = NULL;
 	struct rte_dpaa2_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
 		comp = compare_dpaa2_devname(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c8373e627a..852fcfc4dd 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -808,7 +808,8 @@ fslmc_vfio_process_group(void)
 	bool is_dpmcp_in_blocklist = false, is_dpio_in_blocklist = false;
 	int dpmcp_count = 0, dpio_count = 0, current_device;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			dpmcp_count++;
 			if (dev->device.devargs &&
@@ -825,7 +826,8 @@ fslmc_vfio_process_group(void)
 
 	/* Search the MCP as that should be initialized first. */
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			current_device++;
 			if (dev->device.devargs &&
@@ -872,7 +874,8 @@ fslmc_vfio_process_group(void)
 	}
 
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_IO)
 			current_device++;
 		if (dev->device.devargs &&
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index b43084155a..0186f5acde 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -28,9 +28,9 @@ struct rte_afu_device;
 struct rte_afu_driver;
 
 /** Double linked list of Intel FPGA AFU device. */
-TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
+RTE_TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
 /** Double linked list of Intel FPGA AFU device drivers. */
-TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
+RTE_TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
 
 #define IFPGA_BUS_BITSTREAM_PATH_MAX_LEN 256
 
@@ -71,7 +71,7 @@ struct rte_afu_shared {
  * A structure describing a AFU device.
  */
 struct rte_afu_device {
-	TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
+	RTE_TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
 	struct rte_device device;               /**< Inherit core device */
 	struct rte_rawdev *rawdev;    /**< Point Rawdev */
 	struct rte_afu_id id;                   /**< AFU id within FPGA. */
@@ -105,7 +105,7 @@ typedef int (afu_remove_t)(struct rte_afu_device *);
  * A structure describing a AFU device.
  */
 struct rte_afu_driver {
-	TAILQ_ENTRY(rte_afu_driver) next;       /**< Next afu driver. */
+	RTE_TAILQ_ENTRY(rte_afu_driver) next;       /**< Next afu driver. */
 	struct rte_driver driver;               /**< Inherit core driver. */
 	afu_probe_t *probe;                     /**< Device Probe function. */
 	afu_remove_t *remove;                   /**< Device Remove function. */
diff --git a/drivers/bus/pci/pci_params.c b/drivers/bus/pci/pci_params.c
index 3192e9c967..717388753d 100644
--- a/drivers/bus/pci/pci_params.c
+++ b/drivers/bus/pci/pci_params.c
@@ -2,6 +2,8 @@
  * Copyright 2018 Gaëtan Rivet
  */
 
+#include <sys/queue.h>
+
 #include <rte_bus.h>
 #include <rte_bus_pci.h>
 #include <rte_dev.h>
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 583470e831..673a2850c1 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -37,16 +36,16 @@ struct rte_pci_device;
 struct rte_pci_driver;
 
 /** List of PCI devices */
-TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
+RTE_TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
 /** List of PCI drivers */
-TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
+RTE_TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
 
 /* PCI Bus iterators */
 #define FOREACH_DEVICE_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
 
 struct rte_devargs;
 
@@ -64,7 +63,7 @@ enum rte_pci_kernel_driver {
  * A structure describing a PCI device.
  */
 struct rte_pci_device {
-	TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
+	RTE_TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
 	struct rte_device device;           /**< Inherit core device */
 	struct rte_pci_addr addr;           /**< PCI location. */
 	struct rte_pci_id id;               /**< PCI ID. */
@@ -160,7 +159,7 @@ typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
  * A structure describing a PCI driver.
  */
 struct rte_pci_driver {
-	TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
 	struct rte_driver driver;          /**< Inherit core driver. */
 	struct rte_pci_bus *bus;           /**< PCI bus reference. */
 	rte_pci_probe_t *probe;            /**< Device probe function. */
diff --git a/drivers/bus/pci/windows/pci.c b/drivers/bus/pci/windows/pci.c
index d39a7748b8..d7bd5d6e80 100644
--- a/drivers/bus/pci/windows/pci.c
+++ b/drivers/bus/pci/windows/pci.c
@@ -1,6 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright 2020 Mellanox Technologies, Ltd
  */
+
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/pci/windows/pci_netuio.c b/drivers/bus/pci/windows/pci_netuio.c
index 1bf9133f71..a0b175a8fc 100644
--- a/drivers/bus/pci/windows/pci_netuio.c
+++ b/drivers/bus/pci/windows/pci_netuio.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2020 Intel Corporation.
  */
 
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
index fc315d10fa..2856799953 100644
--- a/drivers/bus/vdev/rte_bus_vdev.h
+++ b/drivers/bus/vdev/rte_bus_vdev.h
@@ -15,12 +15,11 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <rte_dev.h>
 #include <rte_devargs.h>
 
 struct rte_vdev_device {
-	TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
+	RTE_TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
 	struct rte_device device;               /**< Inherit core device */
 };
 
@@ -53,7 +52,7 @@ rte_vdev_device_args(const struct rte_vdev_device *dev)
 }
 
 /** Double linked list of virtual device drivers. */
-TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
+RTE_TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
 
 /**
  * Probe function called for each virtual device driver once.
@@ -107,7 +106,7 @@ typedef int (rte_vdev_dma_unmap_t)(struct rte_vdev_device *dev, void *addr,
  * A virtual device driver abstraction.
  */
 struct rte_vdev_driver {
-	TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
 	struct rte_driver driver;        /**< Inherited general driver. */
 	rte_vdev_probe_t *probe;         /**< Virtual device probe function. */
 	rte_vdev_remove_t *remove;       /**< Virtual device remove function. */
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index 281a2c34e8..a8d8b2327e 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -100,7 +100,8 @@ rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 	struct vdev_custom_scan *custom_scan, *tmp_scan;
 
 	rte_spinlock_lock(&vdev_custom_scan_lock);
-	TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next, tmp_scan) {
+	RTE_TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next,
+				tmp_scan) {
 		if (custom_scan->callback != callback ||
 				(custom_scan->user_arg != (void *)-1 &&
 				custom_scan->user_arg != user_arg))
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 4cf73ce815..6bcff66468 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -20,7 +20,6 @@ extern "C" {
 #include <limits.h>
 #include <stdbool.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -38,15 +37,15 @@ struct rte_vmbus_bus;
 struct vmbus_channel;
 struct vmbus_mon_page;
 
-TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
-TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
+RTE_TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
+RTE_TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
 
 /* VMBus iterators */
 #define FOREACH_DEVICE_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
 
 /** Maximum number of VMBUS resources. */
 enum hv_uio_map {
@@ -62,7 +61,7 @@ enum hv_uio_map {
  * A structure describing a VMBUS device.
  */
 struct rte_vmbus_device {
-	TAILQ_ENTRY(rte_vmbus_device) next;    /**< Next probed VMBUS device */
+	RTE_TAILQ_ENTRY(rte_vmbus_device) next; /**< Next probed VMBUS device */
 	const struct rte_vmbus_driver *driver; /**< Associated driver */
 	struct rte_device device;              /**< Inherit core device */
 	rte_uuid_t device_id;		       /**< VMBUS device id */
@@ -93,7 +92,7 @@ typedef int (vmbus_remove_t)(struct rte_vmbus_device *);
  * A structure describing a VMBUS driver.
  */
 struct rte_vmbus_driver {
-	TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
 	struct rte_driver driver;
 	struct rte_vmbus_bus *bus;          /**< VM bus reference. */
 	vmbus_probe_t *probe;               /**< Device Probe function. */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index dbf85e4eda..ac86b70caf 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -2018,7 +2018,7 @@ bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx)
 	struct ulp_context_list_entry	*entry, *temp;
 
 	rte_spinlock_lock(&bnxt_ulp_ctxt_lock);
-	TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
 		if (entry->ulp_ctx == ulp_ctx) {
 			TAILQ_REMOVE(&ulp_cntx_list, entry, next);
 			rte_free(entry);
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 417f76bf60..65b77faae7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -157,7 +157,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 	/* Destroy all bond flows from its slaves instead of flushing them to
 	 * keep the LACP flow or any other external flows.
 	 */
-	TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
 		lret = bond_flow_destroy(dev, flow, err);
 		if (unlikely(lret != 0))
 			ret = lret;
diff --git a/drivers/net/failsafe/failsafe_flow.c b/drivers/net/failsafe/failsafe_flow.c
index 5e2b5f7c67..354f9fec20 100644
--- a/drivers/net/failsafe/failsafe_flow.c
+++ b/drivers/net/failsafe/failsafe_flow.c
@@ -180,7 +180,7 @@ fs_flow_flush(struct rte_eth_dev *dev,
 			return ret;
 		}
 	}
-	TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
 		TAILQ_REMOVE(&PRIV(dev)->flow_list, flow, next);
 		fs_flow_release(&flow);
 	}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..6590363556 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -5436,7 +5436,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* VSI has child to attach, release child first */
 	if (vsi->veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5444,7 +5444,8 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 	}
 
 	if (vsi->floating_veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head,
+			list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5452,7 +5453,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* Remove all macvlan filters of the VSI */
 	i40e_vsi_remove_all_macvlan_filter(vsi);
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		rte_free(f);
 
 	if (vsi->type != I40E_VSI_MAIN &&
@@ -6055,7 +6056,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
 	i = 0;
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		mac_filter[i] = f->mac_info;
 		ret = i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr);
 		if (ret) {
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60..374b73e4a7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -6,6 +6,7 @@
 #define _I40E_ETHDEV_H_
 
 #include <stdint.h>
+#include <sys/queue.h>
 
 #include <rte_time.h>
 #include <rte_kvargs.h>
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c..e41a84f1d7 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4917,7 +4917,7 @@ i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
 		}
 
 		/* Delete FDIR flows in flow list. */
-		TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 			if (flow->filter_type == RTE_ETH_FILTER_FDIR) {
 				TAILQ_REMOVE(&pf->flow_list, flow, node);
 			}
@@ -4972,7 +4972,7 @@ i40e_flow_flush_ethertype_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete ethertype flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_ETHERTYPE) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
@@ -5000,7 +5000,7 @@ i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete tunnel flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_TUNNEL) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfc..6579b1a00b 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -1366,7 +1366,7 @@ i40e_hash_filter_flush(struct i40e_pf *pf)
 {
 	struct rte_flow *flow, *next;
 
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
 		if (flow->filter_type != RTE_ETH_FILTER_HASH)
 			continue;
 
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index 2e34140c5b..ec24046440 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -216,7 +216,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* remove all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		vlan_num = vsi->vlan_num;
 		filter_type = f->mac_info.filter_type;
 		if (filter_type == I40E_MACVLAN_PERFECT_MATCH ||
@@ -274,7 +274,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* restore all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		if (f->mac_info.filter_type == I40E_MACVLAN_PERFECT_MATCH ||
 		    f->mac_info.filter_type == I40E_MACVLAN_HASH_MATCH) {
 			/**
@@ -563,7 +563,7 @@ rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id,
 	rte_ether_addr_copy(mac_addr, &vf->mac_addr);
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		if (i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr)
 				!= I40E_SUCCESS)
 			PMD_DRV_LOG(WARNING, "Delete MAC failed");
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 1fe270fb22..b86d99e57d 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -1637,7 +1637,7 @@ iavf_flow_init(struct iavf_adapter *ad)
 	TAILQ_INIT(&vf->dist_parser_list);
 	rte_spinlock_init(&vf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 				     engine->type);
@@ -1663,7 +1663,7 @@ iavf_flow_uninit(struct iavf_adapter *ad)
 	struct iavf_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1733,7 +1733,7 @@ iavf_unregister_parser(struct iavf_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -1917,7 +1917,7 @@ iavf_parse_engine_create(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -1946,7 +1946,7 @@ iavf_parse_engine_validate(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2089,7 +2089,7 @@ iavf_flow_is_valid(struct rte_flow *flow)
 	void *temp;
 
 	if (flow && flow->engine) {
-		TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 			if (engine == flow->engine)
 				return true;
 		}
@@ -2142,7 +2142,7 @@ iavf_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
 		ret = iavf_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da87..629e88980d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -4,6 +4,7 @@
 
 #include <errno.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 #include <sys/types.h>
 #include <unistd.h>
 
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954..fadd5f2e5a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1104,7 +1104,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (!vsi || !vsi->mac_num)
 		return -EINVAL;
 
-	TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
 		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
@@ -1115,7 +1115,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (vsi->vlan_num == 0)
 		return 0;
 
-	TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
 		ret = ice_remove_vlan_filter(vsi, &v_f->vlan_info.vlan);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 66b5743abf..3e557efe0c 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1820,7 +1820,7 @@ ice_flow_init(struct ice_adapter *ad)
 	TAILQ_INIT(&pf->dist_parser_list);
 	rte_spinlock_init(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 					engine->type);
@@ -1846,7 +1846,7 @@ ice_flow_uninit(struct ice_adapter *ad)
 	struct ice_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1946,7 +1946,7 @@ ice_unregister_parser(struct ice_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -2272,7 +2272,7 @@ ice_parse_engine_create(struct ice_adapter *ad,
 	void *meta = NULL;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		int ret;
 
 		if (parser_node->parser->parse_pattern_action(ad,
@@ -2305,7 +2305,7 @@ ice_parse_engine_validate(struct ice_adapter *ad,
 	struct ice_flow_parser_node *parser_node;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2477,7 +2477,7 @@ ice_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		ret = ice_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
@@ -2541,7 +2541,7 @@ ice_flow_redirect(struct ice_adapter *ad,
 
 	rte_spinlock_lock(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		if (!p_flow->engine->redirect)
 			continue;
 		ret = p_flow->engine->redirect(ad, p_flow, rd);
diff --git a/drivers/net/softnic/rte_eth_softnic_flow.c b/drivers/net/softnic/rte_eth_softnic_flow.c
index 27eaf380cd..7d054c38d2 100644
--- a/drivers/net/softnic/rte_eth_softnic_flow.c
+++ b/drivers/net/softnic/rte_eth_softnic_flow.c
@@ -2207,7 +2207,8 @@ pmd_flow_flush(struct rte_eth_dev *dev,
 			void *temp;
 			int status;
 
-			TAILQ_FOREACH_SAFE(flow, &table->flows, node, temp) {
+			RTE_TAILQ_FOREACH_SAFE(flow, &table->flows, node,
+				temp) {
 				/* Rule delete. */
 				status = softnic_pipeline_table_rule_delete
 						(softnic,
diff --git a/drivers/net/softnic/rte_eth_softnic_swq.c b/drivers/net/softnic/rte_eth_softnic_swq.c
index 2083d0a976..afe6f05e29 100644
--- a/drivers/net/softnic/rte_eth_softnic_swq.c
+++ b/drivers/net/softnic/rte_eth_softnic_swq.c
@@ -39,7 +39,7 @@ softnic_softnic_swq_free_keep_rxq_txq(struct pmd_internals *p)
 {
 	struct softnic_swq *swq, *tswq;
 
-	TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
+	RTE_TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
 		if ((strncmp(swq->name, "RXQ", strlen("RXQ")) == 0) ||
 			(strncmp(swq->name, "TXQ", strlen("TXQ")) == 0))
 			continue;
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c961e18d67..7b80370b36 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1606,7 +1606,7 @@ remove_hw_queues_from_list(struct dpaa2_dpdmai_dev *dpdmai_dev)
 
 	DPAA2_QDMA_FUNC_TRACE();
 
-	TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
+	RTE_TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
 		if (queue->dpdmai_dev == dpdmai_dev) {
 			TAILQ_REMOVE(&qdma_queue_list, queue, next);
 			rte_free(queue);
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 7017124414..3ebf62e697 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -434,7 +434,7 @@ struct rte_bbdev_callback;
 struct rte_intr_handle;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
+RTE_TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
 
 /**
  * @internal The data structure associated with a device. Drivers can access
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 11f4e6fdbf..f86bf2260b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -879,7 +879,7 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 struct rte_cryptodev_callback;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+RTE_TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
 
 /**
  * Structure used to hold information about the callbacks to be called for a
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/rte_cryptodev_pmd.h
index 1274436870..9542cbf263 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/rte_cryptodev_pmd.h
@@ -66,7 +66,7 @@ struct rte_cryptodev_global {
 
 /* Cryptodev driver, containing the driver ID */
 struct cryptodev_driver {
-	TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
 	const struct rte_driver *driver;
 	uint8_t id;
 };
diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c
index 23aaf8b7e4..7edc6798fe 100644
--- a/lib/eal/common/eal_common_devargs.c
+++ b/lib/eal/common/eal_common_devargs.c
@@ -9,6 +9,7 @@
 #include <stdio.h>
 #include <string.h>
 #include <stdarg.h>
+#include <sys/queue.h>
 
 #include <rte_bus.h>
 #include <rte_class.h>
@@ -18,6 +19,7 @@
 #include <rte_errno.h>
 #include <rte_kvargs.h>
 #include <rte_log.h>
+#include <rte_os.h>
 #include <rte_tailq.h>
 #include "eal_private.h"
 
@@ -291,7 +293,7 @@ rte_devargs_insert(struct rte_devargs **da)
 	if (*da == NULL || (*da)->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
 		if (listed_da == *da)
 			/* devargs already in the list */
 			return 0;
@@ -358,7 +360,7 @@ rte_devargs_remove(struct rte_devargs *devargs)
 	if (devargs == NULL || devargs->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
 		if (strcmp(d->bus->name, devargs->bus->name) == 0 &&
 		    strcmp(d->name, devargs->name) == 0) {
 			TAILQ_REMOVE(&devargs_list, d, next);
diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c
index 3a28a53247..75168ca552 100644
--- a/lib/eal/common/eal_common_fbarray.c
+++ b/lib/eal/common/eal_common_fbarray.c
@@ -9,6 +9,7 @@
 #include <errno.h>
 #include <string.h>
 #include <unistd.h>
+#include <sys/queue.h>
 
 #include <rte_common.h>
 #include <rte_eal_paging.h>
diff --git a/lib/eal/common/eal_common_log.c b/lib/eal/common/eal_common_log.c
index ec8fe23a7f..1be35f5397 100644
--- a/lib/eal/common/eal_common_log.c
+++ b/lib/eal/common/eal_common_log.c
@@ -10,6 +10,7 @@
 #include <errno.h>
 #include <regex.h>
 #include <fnmatch.h>
+#include <sys/queue.h>
 
 #include <rte_eal.h>
 #include <rte_log.h>
diff --git a/lib/eal/common/eal_common_memalloc.c b/lib/eal/common/eal_common_memalloc.c
index e872c6533b..aefdf8de3f 100644
--- a/lib/eal/common/eal_common_memalloc.c
+++ b/lib/eal/common/eal_common_memalloc.c
@@ -3,6 +3,7 @@
  */
 
 #include <string.h>
+#include <sys/queue.h>
 
 #include <rte_errno.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index ff5861b5f3..2cc74b4472 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -6,6 +6,7 @@
 #include <stdlib.h>
 #include <unistd.h>
 #include <string.h>
+#include <sys/queue.h>
 #ifndef RTE_EXEC_ENV_WINDOWS
 #include <syslog.h>
 #endif
@@ -283,7 +284,7 @@ eal_option_device_parse(void)
 	void *tmp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
 		if (ret == 0) {
 			ret = rte_devargs_add(devopt->type, devopt->arg);
 			if (ret)
diff --git a/lib/eal/common/eal_trace.h b/lib/eal/common/eal_trace.h
index 06751eb23a..76fbcd86b0 100644
--- a/lib/eal/common/eal_trace.h
+++ b/lib/eal/common/eal_trace.h
@@ -5,6 +5,8 @@
 #ifndef __EAL_TRACE_H
 #define __EAL_TRACE_H
 
+#include <sys/queue.h>
+
 #include <rte_cycles.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
diff --git a/lib/eal/freebsd/include/rte_os.h b/lib/eal/freebsd/include/rte_os.h
index 627f0483ab..099ad3f019 100644
--- a/lib/eal/freebsd/include/rte_os.h
+++ b/lib/eal/freebsd/include/rte_os.h
@@ -11,6 +11,21 @@
  */
 
 #include <pthread_np.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define	RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
+
 
 typedef cpuset_t rte_cpuset_t;
 #define RTE_HAS_CPUSET
diff --git a/lib/eal/include/rte_bus.h b/lib/eal/include/rte_bus.h
index 80b154fb98..84d364df3f 100644
--- a/lib/eal/include/rte_bus.h
+++ b/lib/eal/include/rte_bus.h
@@ -19,13 +19,12 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_log.h>
 #include <rte_dev.h>
 
 /** Double linked list of buses */
-TAILQ_HEAD(rte_bus_list, rte_bus);
+RTE_TAILQ_HEAD(rte_bus_list, rte_bus);
 
 
 /**
@@ -250,7 +249,7 @@ typedef enum rte_iova_mode (*rte_bus_get_iommu_class_t)(void);
  * A structure describing a generic bus.
  */
 struct rte_bus {
-	TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
+	RTE_TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
 	const char *name;            /**< Name of the bus */
 	rte_bus_scan_t scan;         /**< Scan for devices attached to bus */
 	rte_bus_probe_t probe;       /**< Probe devices on bus */
diff --git a/lib/eal/include/rte_class.h b/lib/eal/include/rte_class.h
index 856d09b22d..d560339652 100644
--- a/lib/eal/include/rte_class.h
+++ b/lib/eal/include/rte_class.h
@@ -22,18 +22,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
-
 #include <rte_dev.h>
 
 /** Double linked list of classes */
-TAILQ_HEAD(rte_class_list, rte_class);
+RTE_TAILQ_HEAD(rte_class_list, rte_class);
 
 /**
  * A structure describing a generic device class.
  */
 struct rte_class {
-	TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
+	RTE_TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
 	const char *name; /**< Name of the class */
 	rte_dev_iterate_t dev_iterate; /**< Device iterator. */
 };
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6dd72c11a1..f6efe0c94e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -18,7 +18,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_compat.h>
@@ -75,7 +74,7 @@ struct rte_mem_resource {
  * A structure describing a device driver.
  */
 struct rte_driver {
-	TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
 	const char *name;                   /**< Driver name. */
 	const char *alias;              /**< Driver alias. */
 };
@@ -90,7 +89,7 @@ struct rte_driver {
  * A structure describing a generic device.
  */
 struct rte_device {
-	TAILQ_ENTRY(rte_device) next; /**< Next device */
+	RTE_TAILQ_ENTRY(rte_device) next; /**< Next device */
 	const char *name;             /**< Device name */
 	const struct rte_driver *driver; /**< Driver assigned after probing */
 	const struct rte_bus *bus;    /**< Bus handle assigned on scan */
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index cd90944fe8..957477b398 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -21,7 +21,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 #include <rte_compat.h>
 #include <rte_bus.h>
 
@@ -76,7 +75,7 @@ enum rte_devtype {
  */
 struct rte_devargs {
 	/** Next in list. */
-	TAILQ_ENTRY(rte_devargs) next;
+	RTE_TAILQ_ENTRY(rte_devargs) next;
 	/** Type of device. */
 	enum rte_devtype type;
 	/** Device policy. */
diff --git a/lib/eal/include/rte_log.h b/lib/eal/include/rte_log.h
index b706bb8710..bb3523467b 100644
--- a/lib/eal/include/rte_log.h
+++ b/lib/eal/include/rte_log.h
@@ -21,7 +21,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdarg.h>
 #include <stdbool.h>
-#include <sys/queue.h>
 
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/eal/include/rte_service.h b/lib/eal/include/rte_service.h
index c7d037d862..1c9275c32a 100644
--- a/lib/eal/include/rte_service.h
+++ b/lib/eal/include/rte_service.h
@@ -29,7 +29,6 @@ extern "C" {
 
 #include<stdio.h>
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/include/rte_tailq.h b/lib/eal/include/rte_tailq.h
index b6fe4e5f78..28cd54ef3e 100644
--- a/lib/eal/include/rte_tailq.h
+++ b/lib/eal/include/rte_tailq.h
@@ -15,17 +15,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <stdio.h>
 #include <rte_debug.h>
 
 /** dummy structure type used by the rte_tailq APIs */
 struct rte_tailq_entry {
-	TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
+	RTE_TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
 	void *data; /**< Pointer to the data referenced by this tailq entry */
 };
 /** dummy */
-TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
+RTE_TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
 
 #define RTE_TAILQ_NAMESIZE 32
 
@@ -48,7 +47,7 @@ struct rte_tailq_elem {
 	 * rte_eal_tailqs_init()
 	 */
 	struct rte_tailq_head *head;
-	TAILQ_ENTRY(rte_tailq_elem) next;
+	RTE_TAILQ_ENTRY(rte_tailq_elem) next;
 	const char name[RTE_TAILQ_NAMESIZE];
 };
 
@@ -126,10 +125,10 @@ RTE_INIT(tailqinitfn_ ##t) \
 }
 
 /* This macro permits both remove and free var within the loop safely.*/
-#ifndef TAILQ_FOREACH_SAFE
-#define TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
-	for ((var) = TAILQ_FIRST((head));			\
-	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1);	\
+#ifndef RTE_TAILQ_FOREACH_SAFE
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
+	for ((var) = RTE_TAILQ_FIRST((head));			\
+	    (var) && ((tvar) = RTE_TAILQ_NEXT((var), field), 1);	\
 	    (var) = (tvar))
 #endif
 
diff --git a/lib/eal/linux/include/rte_os.h b/lib/eal/linux/include/rte_os.h
index 1618b4df22..1a6e5b789f 100644
--- a/lib/eal/linux/include/rte_os.h
+++ b/lib/eal/linux/include/rte_os.h
@@ -11,6 +11,21 @@
  */
 
 #include <sched.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define	RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
+
 
 #ifdef CPU_SETSIZE /* may require _GNU_SOURCE */
 typedef cpu_set_t rte_cpuset_t;
diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c
index e5dc54efb8..103c1f909d 100644
--- a/lib/eal/windows/eal_alarm.c
+++ b/lib/eal/windows/eal_alarm.c
@@ -4,6 +4,7 @@
 
 #include <stdatomic.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 
 #include <rte_alarm.h>
 #include <rte_spinlock.h>
diff --git a/lib/eal/windows/include/rte_os.h b/lib/eal/windows/include/rte_os.h
index 66c711d458..ee7a8c7a08 100644
--- a/lib/eal/windows/include/rte_os.h
+++ b/lib/eal/windows/include/rte_os.h
@@ -18,6 +18,37 @@
 extern "C" {
 #endif
 
+#define	RTE_TAILQ_HEAD(name, type) \
+struct name { \
+	struct type *tqh_first;	/* first element */ \
+	struct type **tqh_last;	/* addr of last next element */	\
+}
+#define	RTE_TAILQ_ENTRY(type) \
+struct { \
+	struct type *tqe_next;	/* next element */ \
+	struct type **tqe_prev;	/* address of previous next element */ \
+}
+#define	RTE_TAILQ_FOREACH(var, head, field) \
+	for ((var) = RTE_TAILQ_FIRST((head)); \
+	    (var); \
+	    (var) = RTE_TAILQ_NEXT((var), field))
+#define	RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define	RTE_TAILQ_FIRST(head)	((head)->tqh_first)
+#define	RTE_TAILQ_NEXT(elm, field) ((elm)->field.tqe_next)
+#define	RTE_STAILQ_HEAD(name, type) \
+struct name { \
+	struct type *stqh_first;/* first element */ \
+	struct type **stqh_last;/* addr of last next element */ \
+}
+#define	RTE_STAILQ_ENTRY(type) \
+struct { \
+	struct type *stqe_next;	/* next element */ \
+}
+
+
 /* cpu_set macros implementation */
 #define RTE_CPU_AND(dst, src1, src2) CPU_AND(dst, src1, src2)
 #define RTE_CPU_OR(dst, src1, src2) CPU_OR(dst, src1, src2)
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index 77f46809f8..5bf517fee9 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -759,7 +759,7 @@ rte_efd_free(struct rte_efd_table *table)
 	efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list);
 	rte_mcfg_tailq_write_lock();
 
-	TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
 		if (te->data == (void *) table) {
 			TAILQ_REMOVE(efd_list, te, next);
 			rte_free(te);
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..d2c9ec42c7 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -21,7 +21,7 @@
 
 struct rte_eth_dev_callback;
 /** @internal Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
+RTE_TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
 struct rte_eth_dev;
 
diff --git a/lib/hash/rte_fbk_hash.h b/lib/hash/rte_fbk_hash.h
index c4d6976d2b..9c3a61c1d6 100644
--- a/lib/hash/rte_fbk_hash.h
+++ b/lib/hash/rte_fbk_hash.h
@@ -17,7 +17,6 @@
 
 #include <stdint.h>
 #include <errno.h>
-#include <sys/queue.h>
 
 #ifdef __cplusplus
 extern "C" {
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index d5a95a6e00..696a1121e2 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2021 Intel Corporation
  */
 
+#include <sys/queue.h>
+
 #include <rte_thash.h>
 #include <rte_tailq.h>
 #include <rte_random.h>
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index 0bfe64b14e..80f931c32a 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -62,7 +62,7 @@ struct ip_frag_key {
  * First two entries in the frags[] array are for the last and first fragments.
  */
 struct ip_frag_pkt {
-	TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
+	RTE_TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
 	struct ip_frag_key key;           /**< fragmentation key */
 	uint64_t             start;       /**< creation timestamp */
 	uint32_t             total_size;  /**< expected reassembled size */
@@ -83,7 +83,7 @@ struct rte_ip_frag_death_row {
 	/**< mbufs to be freed */
 };
 
-TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
+RTE_TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
 
 /** fragmentation table statistics */
 struct ip_frag_tbl_stat {
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 59a588425b..c5f859ae71 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1337,7 +1337,7 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 
 	rte_mcfg_mempool_read_lock();
 
-	TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
+	RTE_TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
 		(*func)((struct rte_mempool *) te->data, arg);
 	}
 
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 4235d6f0bf..f57ecbd6fc 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -38,7 +38,6 @@
 #include <stdint.h>
 #include <errno.h>
 #include <inttypes.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_spinlock.h>
@@ -141,7 +140,7 @@ struct rte_mempool_objsz {
  * double-frees.
  */
 struct rte_mempool_objhdr {
-	STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;          /**< The mempool owning the object. */
 	rte_iova_t iova;                 /**< IO address of the object. */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -152,7 +151,7 @@ struct rte_mempool_objhdr {
 /**
  * A list of object headers type
  */
-STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
+RTE_STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 
@@ -171,7 +170,7 @@ struct rte_mempool_objtlr {
 /**
  * A list of memory where objects are stored
  */
-STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
+RTE_STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
 
 /**
  * Callback used to free a memory chunk
@@ -186,7 +185,7 @@ typedef void (rte_mempool_memchunk_free_cb_t)(struct rte_mempool_memhdr *memhdr,
  * and physically contiguous.
  */
 struct rte_mempool_memhdr {
-	STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;  /**< The mempool owning the chunk */
 	void *addr;              /**< Virtual address of the chunk */
 	rte_iova_t iova;         /**< IO address of the chunk */
diff --git a/lib/pci/rte_pci.h b/lib/pci/rte_pci.h
index 1f33d687f4..71cbd441c7 100644
--- a/lib/pci/rte_pci.h
+++ b/lib/pci/rte_pci.h
@@ -18,7 +18,6 @@ extern "C" {
 
 #include <stdio.h>
 #include <limits.h>
-#include <sys/queue.h>
 #include <inttypes.h>
 #include <sys/types.h>
 
diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 16718ca7f1..43ce1a29d4 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -26,7 +26,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdint.h>
 #include <string.h>
-#include <sys/queue.h>
 #include <errno.h>
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/table/rte_swx_table.h b/lib/table/rte_swx_table.h
index e23f2304c6..f93e5f3f95 100644
--- a/lib/table/rte_swx_table.h
+++ b/lib/table/rte_swx_table.h
@@ -16,7 +16,8 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
+
+#include <rte_os.h>
 
 /** Match type. */
 enum rte_swx_table_match_type {
@@ -68,7 +69,7 @@ struct rte_swx_table_entry {
 	/** Used to facilitate the membership of this table entry to a
 	 * linked list.
 	 */
-	TAILQ_ENTRY(rte_swx_table_entry) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_entry) node;
 
 	/** Key value for the current entry. Array of *key_size* bytes or NULL
 	 * if the *key_size* for the current table is 0.
@@ -111,7 +112,7 @@ struct rte_swx_table_entry {
 };
 
 /** List of table entries. */
-TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
+RTE_TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
 
 /**
  * Table memory footprint get
diff --git a/lib/table/rte_swx_table_selector.h b/lib/table/rte_swx_table_selector.h
index 71b6a74810..62988d2856 100644
--- a/lib/table/rte_swx_table_selector.h
+++ b/lib/table/rte_swx_table_selector.h
@@ -16,7 +16,6 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_compat.h>
 
@@ -56,7 +55,7 @@ struct rte_swx_table_selector_params {
 /** Group member parameters. */
 struct rte_swx_table_selector_member {
 	/** Linked list connectivity. */
-	TAILQ_ENTRY(rte_swx_table_selector_member) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_selector_member) node;
 
 	/** Member ID. */
 	uint32_t member_id;
@@ -66,7 +65,7 @@ struct rte_swx_table_selector_member {
 };
 
 /** List of group members. */
-TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
+RTE_TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
 
 /** Group parameters. */
 struct rte_swx_table_selector_group {
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e0b67721b6..e4a445e709 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -32,7 +32,7 @@ vhost_user_iotlb_pending_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_pending_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -100,7 +100,8 @@ vhost_user_iotlb_pending_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next,
+				temp_node) {
 		if (node->iova < iova)
 			continue;
 		if (node->iova >= iova + size)
@@ -121,7 +122,7 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -141,7 +142,7 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
 
 	entry_idx = rte_rand() % vq->iotlb_cache_nr;
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		if (!entry_idx) {
 			TAILQ_REMOVE(&vq->iotlb_list, node, next);
 			rte_mempool_put(vq->iotlb_pool, node);
@@ -218,7 +219,7 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		/* Sorted list */
 		if (unlikely(iova + size < node->iova))
 			break;
diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/rte_vdpa_dev.h
index bfada387b0..b0f494815f 100644
--- a/lib/vhost/rte_vdpa_dev.h
+++ b/lib/vhost/rte_vdpa_dev.h
@@ -71,7 +71,7 @@ struct rte_vdpa_dev_ops {
  * vdpa device structure includes device address and device operations.
  */
 struct rte_vdpa_device {
-	TAILQ_ENTRY(rte_vdpa_device) next;
+	RTE_TAILQ_ENTRY(rte_vdpa_device) next;
 	/** Generic device information */
 	struct rte_device *device;
 	/** vdpa device operations */
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index 99a926a772..6dd91859ac 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -115,7 +115,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
 	int ret = -1;
 
 	rte_spinlock_lock(&vdpa_device_list_lock);
-	TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
+	RTE_TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
 		if (dev != cur_dev)
 			continue;
 
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCHv4] eal: remove sys/queue.h from public headers.
  2021-08-13  1:02  1%   ` [dpdk-dev] [PATCHv4] eal: remove sys/queue.h from public headers William Tu
@ 2021-08-13  1:11  0%     ` Stephen Hemminger
  2021-08-13  1:36  0%       ` William Tu
  2021-08-13  3:36  1%     ` [dpdk-dev] [PATCHv5] " William Tu
  1 sibling, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-08-13  1:11 UTC (permalink / raw)
  To: William Tu; +Cc: dev, Dmitry.Kozliuk, nick.connolly

On Fri, 13 Aug 2021 01:02:50 +0000
William Tu <u9012063@gmail.com> wrote:

> Currently there are some public headers that include 'sys/queue.h', which
> is not POSIX, but usually provided by Linux/BSD system library.
> (Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
> The file is missing on Windows. During the windows build, DPDK uses a
> bundled copy, so building DPDK library works fine.  But when OVS or other
> applications use DPDK as a library, because some DPDK public headers
> include 'sys/queue.h', on Windows, it triggers error due to no such file.
> 
> One solution is to installl the 'lib/eal/windows/include/sys/queue.h' into
> Windows environment, such as [1]. However, this means DPDK exports the
> functinoalities of 'sys/queue.h' into the environment, which might cause
> symbols, macros, headers clashing with other applications.
> 
> The patch fixes it by removing the "#include <sys/queue.h>" from
> DPDK public headers, so programs including DPDK headers don't depend
> on system to provide 'sys/queue.h'. When these public headers use
> macros such as TAILQ_xxx, we replace it with RTE_ prefix.
> For Windows, we copy the definitions from <sys/queue.h> to rte_os.h
> under windows. Note that these RTE_ macros are compatible with
> <sys/queue.h>, only at the level of API (to use with <sys/queue.h>
> macros in C files) and ABI (to avoid breaking it).
> 
> Additionally, the TAILQ_FOREACH_SAFE is not part of <sys/queue.h>,
> the patch replaces it with RTE_TAILQ_FOREACH_SAFE.
> With this patch, all the public headers no longer have
> "#include <sys/queue.h>" or "TAILQ_xxx" macros.


Please run a spell checker on the commit message if you resubmit.

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCHv4] eal: remove sys/queue.h from public headers.
  2021-08-13  1:11  0%     ` Stephen Hemminger
@ 2021-08-13  1:36  0%       ` William Tu
  0 siblings, 0 replies; 200+ results
From: William Tu @ 2021-08-13  1:36 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: dpdk-dev, Dmitry Kozliuk, Nick Connolly

On Thu, Aug 12, 2021 at 6:11 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Fri, 13 Aug 2021 01:02:50 +0000
> William Tu <u9012063@gmail.com> wrote:
>
> > Currently there are some public headers that include 'sys/queue.h', which
> > is not POSIX, but usually provided by Linux/BSD system library.
> > (Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
> > The file is missing on Windows. During the windows build, DPDK uses a
> > bundled copy, so building DPDK library works fine.  But when OVS or other
> > applications use DPDK as a library, because some DPDK public headers
> > include 'sys/queue.h', on Windows, it triggers error due to no such file.
> >
> > One solution is to installl the 'lib/eal/windows/include/sys/queue.h' into
> > Windows environment, such as [1]. However, this means DPDK exports the
> > functinoalities of 'sys/queue.h' into the environment, which might cause
> > symbols, macros, headers clashing with other applications.
> >
> > The patch fixes it by removing the "#include <sys/queue.h>" from
> > DPDK public headers, so programs including DPDK headers don't depend
> > on system to provide 'sys/queue.h'. When these public headers use
> > macros such as TAILQ_xxx, we replace it with RTE_ prefix.
> > For Windows, we copy the definitions from <sys/queue.h> to rte_os.h
> > under windows. Note that these RTE_ macros are compatible with
> > <sys/queue.h>, only at the level of API (to use with <sys/queue.h>
> > macros in C files) and ABI (to avoid breaking it).
> >
> > Additionally, the TAILQ_FOREACH_SAFE is not part of <sys/queue.h>,
> > the patch replaces it with RTE_TAILQ_FOREACH_SAFE.
> > With this patch, all the public headers no longer have
> > "#include <sys/queue.h>" or "TAILQ_xxx" macros.
>
>
> Please run a spell checker on the commit message if you resubmit.

OK, will do it, thanks!
William

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCHv5] eal: remove sys/queue.h from public headers.
  2021-08-13  1:02  1%   ` [dpdk-dev] [PATCHv4] eal: remove sys/queue.h from public headers William Tu
  2021-08-13  1:11  0%     ` Stephen Hemminger
@ 2021-08-13  3:36  1%     ` William Tu
  2021-08-13 18:59  0%       ` Dmitry Kozlyuk
  2021-08-14  2:51  1%       ` [dpdk-dev] [PATCH v6] " William Tu
  1 sibling, 2 replies; 200+ results
From: William Tu @ 2021-08-13  3:36 UTC (permalink / raw)
  To: dev; +Cc: Dmitry.Kozliuk, nick.connolly, stephen


Currently some public headers include 'sys/queue.h', which is not POSIX
but is usually provided by the Linux/BSD system library. (Not in
POSIX.1, POSIX.1-2001, or POSIX.1-2008; present on the BSDs.)
The file is missing on Windows. During the Windows build, DPDK uses a
bundled copy, so building the DPDK library works fine. But when OVS or
other applications use DPDK as a library on Windows, the build fails
because some DPDK public headers include 'sys/queue.h' and no such file
is available.

One solution is to install 'lib/eal/windows/include/sys/queue.h' into
the Windows environment, as proposed in [1]. However, this means DPDK
exports the functionality of 'sys/queue.h' into the environment, which
might cause symbol, macro, and header clashes with other applications.

The patch fixes it by removing "#include <sys/queue.h>" from DPDK
public headers, so programs including DPDK headers don't depend on the
system to provide 'sys/queue.h'. Where these public headers use macros
such as TAILQ_xxx, we replace them with RTE_-prefixed equivalents.
For Windows, we copy the needed definitions from <sys/queue.h> into
the Windows rte_os.h. Note that these RTE_ macros are compatible with
<sys/queue.h> only at the level of API (they can be mixed with
<sys/queue.h> macros in C files) and ABI (to avoid breaking it).

Additionally, TAILQ_FOREACH_SAFE is not part of <sys/queue.h>, so the
patch replaces it with RTE_TAILQ_FOREACH_SAFE. With this patch, none of
the public headers include <sys/queue.h> or use "TAILQ_xxx" macros.

[1] http://mails.dpdk.org/archives/dev/2021-August/216304.html

Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
Signed-off-by: William Tu <u9012063@gmail.com>
---
v4 -> v5:
* fix compile error at drivers/net/ipn3ke/ipn3ke_flow.c:1234
* run a spell checker over the commit message
---
 drivers/bus/auxiliary/private.h            |  1 +
 drivers/bus/auxiliary/rte_bus_auxiliary.h  |  5 ++--
 drivers/bus/dpaa/dpaa_bus.c                |  4 +--
 drivers/bus/fslmc/fslmc_bus.c              |  4 +--
 drivers/bus/fslmc/fslmc_vfio.c             |  9 ++++---
 drivers/bus/ifpga/rte_bus_ifpga.h          |  8 +++---
 drivers/bus/pci/pci_params.c               |  2 ++
 drivers/bus/pci/rte_bus_pci.h              | 13 +++++----
 drivers/bus/pci/windows/pci.c              |  3 +++
 drivers/bus/pci/windows/pci_netuio.c       |  2 ++
 drivers/bus/vdev/rte_bus_vdev.h            |  7 +++--
 drivers/bus/vdev/vdev.c                    |  3 ++-
 drivers/bus/vmbus/rte_bus_vmbus.h          | 13 +++++----
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c         |  2 +-
 drivers/net/bonding/rte_eth_bond_flow.c    |  2 +-
 drivers/net/failsafe/failsafe_flow.c       |  2 +-
 drivers/net/i40e/i40e_ethdev.c             |  9 ++++---
 drivers/net/i40e/i40e_ethdev.h             |  1 +
 drivers/net/i40e/i40e_flow.c               |  6 ++---
 drivers/net/i40e/i40e_hash.c               |  2 +-
 drivers/net/i40e/rte_pmd_i40e.c            |  6 ++---
 drivers/net/iavf/iavf_generic_flow.c       | 14 +++++-----
 drivers/net/ice/ice_dcf_ethdev.c           |  1 +
 drivers/net/ice/ice_ethdev.c               |  4 +--
 drivers/net/ice/ice_generic_flow.c         | 14 +++++-----
 drivers/net/ipn3ke/ipn3ke_flow.c           |  2 +-
 drivers/net/softnic/rte_eth_softnic_flow.c |  3 ++-
 drivers/net/softnic/rte_eth_softnic_swq.c  |  2 +-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c        |  2 +-
 lib/bbdev/rte_bbdev.h                      |  2 +-
 lib/cryptodev/rte_cryptodev.h              |  2 +-
 lib/cryptodev/rte_cryptodev_pmd.h          |  2 +-
 lib/eal/common/eal_common_devargs.c        |  6 +++--
 lib/eal/common/eal_common_fbarray.c        |  1 +
 lib/eal/common/eal_common_log.c            |  1 +
 lib/eal/common/eal_common_memalloc.c       |  1 +
 lib/eal/common/eal_common_options.c        |  3 ++-
 lib/eal/common/eal_trace.h                 |  2 ++
 lib/eal/freebsd/include/rte_os.h           | 15 +++++++++++
 lib/eal/include/rte_bus.h                  |  5 ++--
 lib/eal/include/rte_class.h                |  6 ++---
 lib/eal/include/rte_dev.h                  |  5 ++--
 lib/eal/include/rte_devargs.h              |  3 +--
 lib/eal/include/rte_log.h                  |  1 -
 lib/eal/include/rte_service.h              |  1 -
 lib/eal/include/rte_tailq.h                | 15 +++++------
 lib/eal/linux/include/rte_os.h             | 15 +++++++++++
 lib/eal/windows/eal_alarm.c                |  1 +
 lib/eal/windows/include/rte_os.h           | 31 ++++++++++++++++++++++
 lib/efd/rte_efd.c                          |  2 +-
 lib/ethdev/rte_ethdev_core.h               |  2 +-
 lib/hash/rte_fbk_hash.h                    |  1 -
 lib/hash/rte_thash.c                       |  2 ++
 lib/ip_frag/rte_ip_frag.h                  |  4 +--
 lib/mempool/rte_mempool.c                  |  2 +-
 lib/mempool/rte_mempool.h                  |  9 +++----
 lib/pci/rte_pci.h                          |  1 -
 lib/ring/rte_ring_core.h                   |  1 -
 lib/table/rte_swx_table.h                  |  7 ++---
 lib/table/rte_swx_table_selector.h         |  5 ++--
 lib/vhost/iotlb.c                          | 11 ++++----
 lib/vhost/rte_vdpa_dev.h                   |  2 +-
 lib/vhost/vdpa.c                           |  2 +-
 63 files changed, 194 insertions(+), 121 deletions(-)

diff --git a/drivers/bus/auxiliary/private.h b/drivers/bus/auxiliary/private.h
index 9987e8b501..d22e83cf7a 100644
--- a/drivers/bus/auxiliary/private.h
+++ b/drivers/bus/auxiliary/private.h
@@ -7,6 +7,7 @@
 
 #include <stdbool.h>
 #include <stdio.h>
+#include <sys/queue.h>
 
 #include "rte_bus_auxiliary.h"
 
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index 2462bad2ba..b1f5610404 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -113,7 +112,7 @@ typedef int (rte_auxiliary_dma_unmap_t)(struct rte_auxiliary_device *dev,
  * A structure describing an auxiliary device.
  */
 struct rte_auxiliary_device {
-	TAILQ_ENTRY(rte_auxiliary_device) next;   /**< Next probed device. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
 	struct rte_device device;                 /**< Inherit core device */
 	char name[RTE_DEV_NAME_MAX_LEN + 1];      /**< ASCII device name */
 	struct rte_intr_handle intr_handle;       /**< Interrupt handle */
@@ -124,7 +123,7 @@ struct rte_auxiliary_device {
  * A structure describing an auxiliary driver.
  */
 struct rte_auxiliary_driver {
-	TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
 	struct rte_driver driver;             /**< Inherit core driver. */
 	struct rte_auxiliary_bus *bus;        /**< Auxiliary bus reference. */
 	rte_auxiliary_match_t *match;         /**< Device match function. */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index e499305d85..6cab2ae760 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -105,7 +105,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		comp = compare_dpaa_devices(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
@@ -245,7 +245,7 @@ dpaa_clean_device_list(void)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index becc455f6b..8c8f8a298d 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -45,7 +45,7 @@ cleanup_fslmc_device_list(void)
 	struct rte_dpaa2_device *dev;
 	struct rte_dpaa2_device *t_dev;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
 		TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
@@ -82,7 +82,7 @@ insert_in_device_list(struct rte_dpaa2_device *newdev)
 	struct rte_dpaa2_device *dev = NULL;
 	struct rte_dpaa2_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
 		comp = compare_dpaa2_devname(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c8373e627a..852fcfc4dd 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -808,7 +808,8 @@ fslmc_vfio_process_group(void)
 	bool is_dpmcp_in_blocklist = false, is_dpio_in_blocklist = false;
 	int dpmcp_count = 0, dpio_count = 0, current_device;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			dpmcp_count++;
 			if (dev->device.devargs &&
@@ -825,7 +826,8 @@ fslmc_vfio_process_group(void)
 
 	/* Search the MCP as that should be initialized first. */
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			current_device++;
 			if (dev->device.devargs &&
@@ -872,7 +874,8 @@ fslmc_vfio_process_group(void)
 	}
 
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_IO)
 			current_device++;
 		if (dev->device.devargs &&
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index b43084155a..0186f5acde 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -28,9 +28,9 @@ struct rte_afu_device;
 struct rte_afu_driver;
 
 /** Double linked list of Intel FPGA AFU device. */
-TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
+RTE_TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
 /** Double linked list of Intel FPGA AFU device drivers. */
-TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
+RTE_TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
 
 #define IFPGA_BUS_BITSTREAM_PATH_MAX_LEN 256
 
@@ -71,7 +71,7 @@ struct rte_afu_shared {
  * A structure describing a AFU device.
  */
 struct rte_afu_device {
-	TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
+	RTE_TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
 	struct rte_device device;               /**< Inherit core device */
 	struct rte_rawdev *rawdev;    /**< Point Rawdev */
 	struct rte_afu_id id;                   /**< AFU id within FPGA. */
@@ -105,7 +105,7 @@ typedef int (afu_remove_t)(struct rte_afu_device *);
  * A structure describing a AFU device.
  */
 struct rte_afu_driver {
-	TAILQ_ENTRY(rte_afu_driver) next;       /**< Next afu driver. */
+	RTE_TAILQ_ENTRY(rte_afu_driver) next;       /**< Next afu driver. */
 	struct rte_driver driver;               /**< Inherit core driver. */
 	afu_probe_t *probe;                     /**< Device Probe function. */
 	afu_remove_t *remove;                   /**< Device Remove function. */
diff --git a/drivers/bus/pci/pci_params.c b/drivers/bus/pci/pci_params.c
index 3192e9c967..717388753d 100644
--- a/drivers/bus/pci/pci_params.c
+++ b/drivers/bus/pci/pci_params.c
@@ -2,6 +2,8 @@
  * Copyright 2018 Gaëtan Rivet
  */
 
+#include <sys/queue.h>
+
 #include <rte_bus.h>
 #include <rte_bus_pci.h>
 #include <rte_dev.h>
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 583470e831..673a2850c1 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -37,16 +36,16 @@ struct rte_pci_device;
 struct rte_pci_driver;
 
 /** List of PCI devices */
-TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
+RTE_TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
 /** List of PCI drivers */
-TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
+RTE_TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
 
 /* PCI Bus iterators */
 #define FOREACH_DEVICE_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
 
 struct rte_devargs;
 
@@ -64,7 +63,7 @@ enum rte_pci_kernel_driver {
  * A structure describing a PCI device.
  */
 struct rte_pci_device {
-	TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
+	RTE_TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
 	struct rte_device device;           /**< Inherit core device */
 	struct rte_pci_addr addr;           /**< PCI location. */
 	struct rte_pci_id id;               /**< PCI ID. */
@@ -160,7 +159,7 @@ typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
  * A structure describing a PCI driver.
  */
 struct rte_pci_driver {
-	TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
 	struct rte_driver driver;          /**< Inherit core driver. */
 	struct rte_pci_bus *bus;           /**< PCI bus reference. */
 	rte_pci_probe_t *probe;            /**< Device probe function. */
diff --git a/drivers/bus/pci/windows/pci.c b/drivers/bus/pci/windows/pci.c
index d39a7748b8..d7bd5d6e80 100644
--- a/drivers/bus/pci/windows/pci.c
+++ b/drivers/bus/pci/windows/pci.c
@@ -1,6 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright 2020 Mellanox Technologies, Ltd
  */
+
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/pci/windows/pci_netuio.c b/drivers/bus/pci/windows/pci_netuio.c
index 1bf9133f71..a0b175a8fc 100644
--- a/drivers/bus/pci/windows/pci_netuio.c
+++ b/drivers/bus/pci/windows/pci_netuio.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2020 Intel Corporation.
  */
 
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
index fc315d10fa..2856799953 100644
--- a/drivers/bus/vdev/rte_bus_vdev.h
+++ b/drivers/bus/vdev/rte_bus_vdev.h
@@ -15,12 +15,11 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <rte_dev.h>
 #include <rte_devargs.h>
 
 struct rte_vdev_device {
-	TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
+	RTE_TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
 	struct rte_device device;               /**< Inherit core device */
 };
 
@@ -53,7 +52,7 @@ rte_vdev_device_args(const struct rte_vdev_device *dev)
 }
 
 /** Double linked list of virtual device drivers. */
-TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
+RTE_TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
 
 /**
  * Probe function called for each virtual device driver once.
@@ -107,7 +106,7 @@ typedef int (rte_vdev_dma_unmap_t)(struct rte_vdev_device *dev, void *addr,
  * A virtual device driver abstraction.
  */
 struct rte_vdev_driver {
-	TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
 	struct rte_driver driver;        /**< Inherited general driver. */
 	rte_vdev_probe_t *probe;         /**< Virtual device probe function. */
 	rte_vdev_remove_t *remove;       /**< Virtual device remove function. */
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index 281a2c34e8..a8d8b2327e 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -100,7 +100,8 @@ rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 	struct vdev_custom_scan *custom_scan, *tmp_scan;
 
 	rte_spinlock_lock(&vdev_custom_scan_lock);
-	TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next, tmp_scan) {
+	RTE_TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next,
+				tmp_scan) {
 		if (custom_scan->callback != callback ||
 				(custom_scan->user_arg != (void *)-1 &&
 				custom_scan->user_arg != user_arg))
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 4cf73ce815..6bcff66468 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -20,7 +20,6 @@ extern "C" {
 #include <limits.h>
 #include <stdbool.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -38,15 +37,15 @@ struct rte_vmbus_bus;
 struct vmbus_channel;
 struct vmbus_mon_page;
 
-TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
-TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
+RTE_TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
+RTE_TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
 
 /* VMBus iterators */
 #define FOREACH_DEVICE_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
 
 /** Maximum number of VMBUS resources. */
 enum hv_uio_map {
@@ -62,7 +61,7 @@ enum hv_uio_map {
  * A structure describing a VMBUS device.
  */
 struct rte_vmbus_device {
-	TAILQ_ENTRY(rte_vmbus_device) next;    /**< Next probed VMBUS device */
+	RTE_TAILQ_ENTRY(rte_vmbus_device) next; /**< Next probed VMBUS device */
 	const struct rte_vmbus_driver *driver; /**< Associated driver */
 	struct rte_device device;              /**< Inherit core device */
 	rte_uuid_t device_id;		       /**< VMBUS device id */
@@ -93,7 +92,7 @@ typedef int (vmbus_remove_t)(struct rte_vmbus_device *);
  * A structure describing a VMBUS driver.
  */
 struct rte_vmbus_driver {
-	TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
 	struct rte_driver driver;
 	struct rte_vmbus_bus *bus;          /**< VM bus reference. */
 	vmbus_probe_t *probe;               /**< Device Probe function. */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index dbf85e4eda..ac86b70caf 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -2018,7 +2018,7 @@ bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx)
 	struct ulp_context_list_entry	*entry, *temp;
 
 	rte_spinlock_lock(&bnxt_ulp_ctxt_lock);
-	TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
 		if (entry->ulp_ctx == ulp_ctx) {
 			TAILQ_REMOVE(&ulp_cntx_list, entry, next);
 			rte_free(entry);
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 417f76bf60..65b77faae7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -157,7 +157,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 	/* Destroy all bond flows from its slaves instead of flushing them to
 	 * keep the LACP flow or any other external flows.
 	 */
-	TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
 		lret = bond_flow_destroy(dev, flow, err);
 		if (unlikely(lret != 0))
 			ret = lret;
diff --git a/drivers/net/failsafe/failsafe_flow.c b/drivers/net/failsafe/failsafe_flow.c
index 5e2b5f7c67..354f9fec20 100644
--- a/drivers/net/failsafe/failsafe_flow.c
+++ b/drivers/net/failsafe/failsafe_flow.c
@@ -180,7 +180,7 @@ fs_flow_flush(struct rte_eth_dev *dev,
 			return ret;
 		}
 	}
-	TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
 		TAILQ_REMOVE(&PRIV(dev)->flow_list, flow, next);
 		fs_flow_release(&flow);
 	}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..6590363556 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -5436,7 +5436,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* VSI has child to attach, release child first */
 	if (vsi->veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5444,7 +5444,8 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 	}
 
 	if (vsi->floating_veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head,
+			list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5452,7 +5453,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* Remove all macvlan filters of the VSI */
 	i40e_vsi_remove_all_macvlan_filter(vsi);
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		rte_free(f);
 
 	if (vsi->type != I40E_VSI_MAIN &&
@@ -6055,7 +6056,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
 	i = 0;
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		mac_filter[i] = f->mac_info;
 		ret = i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr);
 		if (ret) {
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60..374b73e4a7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -6,6 +6,7 @@
 #define _I40E_ETHDEV_H_
 
 #include <stdint.h>
+#include <sys/queue.h>
 
 #include <rte_time.h>
 #include <rte_kvargs.h>
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c..e41a84f1d7 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4917,7 +4917,7 @@ i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
 		}
 
 		/* Delete FDIR flows in flow list. */
-		TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 			if (flow->filter_type == RTE_ETH_FILTER_FDIR) {
 				TAILQ_REMOVE(&pf->flow_list, flow, node);
 			}
@@ -4972,7 +4972,7 @@ i40e_flow_flush_ethertype_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete ethertype flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_ETHERTYPE) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
@@ -5000,7 +5000,7 @@ i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete tunnel flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_TUNNEL) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfc..6579b1a00b 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -1366,7 +1366,7 @@ i40e_hash_filter_flush(struct i40e_pf *pf)
 {
 	struct rte_flow *flow, *next;
 
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
 		if (flow->filter_type != RTE_ETH_FILTER_HASH)
 			continue;
 
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index 2e34140c5b..ec24046440 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -216,7 +216,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* remove all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		vlan_num = vsi->vlan_num;
 		filter_type = f->mac_info.filter_type;
 		if (filter_type == I40E_MACVLAN_PERFECT_MATCH ||
@@ -274,7 +274,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* restore all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		if (f->mac_info.filter_type == I40E_MACVLAN_PERFECT_MATCH ||
 		    f->mac_info.filter_type == I40E_MACVLAN_HASH_MATCH) {
 			/**
@@ -563,7 +563,7 @@ rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id,
 	rte_ether_addr_copy(mac_addr, &vf->mac_addr);
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		if (i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr)
 				!= I40E_SUCCESS)
 			PMD_DRV_LOG(WARNING, "Delete MAC failed");
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 1fe270fb22..b86d99e57d 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -1637,7 +1637,7 @@ iavf_flow_init(struct iavf_adapter *ad)
 	TAILQ_INIT(&vf->dist_parser_list);
 	rte_spinlock_init(&vf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 				     engine->type);
@@ -1663,7 +1663,7 @@ iavf_flow_uninit(struct iavf_adapter *ad)
 	struct iavf_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1733,7 +1733,7 @@ iavf_unregister_parser(struct iavf_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -1917,7 +1917,7 @@ iavf_parse_engine_create(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -1946,7 +1946,7 @@ iavf_parse_engine_validate(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2089,7 +2089,7 @@ iavf_flow_is_valid(struct rte_flow *flow)
 	void *temp;
 
 	if (flow && flow->engine) {
-		TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 			if (engine == flow->engine)
 				return true;
 		}
@@ -2142,7 +2142,7 @@ iavf_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
 		ret = iavf_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da87..629e88980d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -4,6 +4,7 @@
 
 #include <errno.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 #include <sys/types.h>
 #include <unistd.h>
 
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954..fadd5f2e5a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1104,7 +1104,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (!vsi || !vsi->mac_num)
 		return -EINVAL;
 
-	TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
 		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
@@ -1115,7 +1115,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (vsi->vlan_num == 0)
 		return 0;
 
-	TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
 		ret = ice_remove_vlan_filter(vsi, &v_f->vlan_info.vlan);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 66b5743abf..3e557efe0c 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1820,7 +1820,7 @@ ice_flow_init(struct ice_adapter *ad)
 	TAILQ_INIT(&pf->dist_parser_list);
 	rte_spinlock_init(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 					engine->type);
@@ -1846,7 +1846,7 @@ ice_flow_uninit(struct ice_adapter *ad)
 	struct ice_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1946,7 +1946,7 @@ ice_unregister_parser(struct ice_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -2272,7 +2272,7 @@ ice_parse_engine_create(struct ice_adapter *ad,
 	void *meta = NULL;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		int ret;
 
 		if (parser_node->parser->parse_pattern_action(ad,
@@ -2305,7 +2305,7 @@ ice_parse_engine_validate(struct ice_adapter *ad,
 	struct ice_flow_parser_node *parser_node;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2477,7 +2477,7 @@ ice_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		ret = ice_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
@@ -2541,7 +2541,7 @@ ice_flow_redirect(struct ice_adapter *ad,
 
 	rte_spinlock_lock(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		if (!p_flow->engine->redirect)
 			continue;
 		ret = p_flow->engine->redirect(ad, p_flow, rd);
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index c702e19ea5..f5867ca055 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -1231,7 +1231,7 @@ ipn3ke_flow_flush(struct rte_eth_dev *dev,
 	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
 	struct rte_flow *flow, *temp;
 
-	TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
 		TAILQ_REMOVE(&hw->flow_list, flow, next);
 		rte_free(flow);
 	}
diff --git a/drivers/net/softnic/rte_eth_softnic_flow.c b/drivers/net/softnic/rte_eth_softnic_flow.c
index 27eaf380cd..7d054c38d2 100644
--- a/drivers/net/softnic/rte_eth_softnic_flow.c
+++ b/drivers/net/softnic/rte_eth_softnic_flow.c
@@ -2207,7 +2207,8 @@ pmd_flow_flush(struct rte_eth_dev *dev,
 			void *temp;
 			int status;
 
-			TAILQ_FOREACH_SAFE(flow, &table->flows, node, temp) {
+			RTE_TAILQ_FOREACH_SAFE(flow, &table->flows, node,
+				temp) {
 				/* Rule delete. */
 				status = softnic_pipeline_table_rule_delete
 						(softnic,
diff --git a/drivers/net/softnic/rte_eth_softnic_swq.c b/drivers/net/softnic/rte_eth_softnic_swq.c
index 2083d0a976..afe6f05e29 100644
--- a/drivers/net/softnic/rte_eth_softnic_swq.c
+++ b/drivers/net/softnic/rte_eth_softnic_swq.c
@@ -39,7 +39,7 @@ softnic_softnic_swq_free_keep_rxq_txq(struct pmd_internals *p)
 {
 	struct softnic_swq *swq, *tswq;
 
-	TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
+	RTE_TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
 		if ((strncmp(swq->name, "RXQ", strlen("RXQ")) == 0) ||
 			(strncmp(swq->name, "TXQ", strlen("TXQ")) == 0))
 			continue;
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c961e18d67..7b80370b36 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1606,7 +1606,7 @@ remove_hw_queues_from_list(struct dpaa2_dpdmai_dev *dpdmai_dev)
 
 	DPAA2_QDMA_FUNC_TRACE();
 
-	TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
+	RTE_TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
 		if (queue->dpdmai_dev == dpdmai_dev) {
 			TAILQ_REMOVE(&qdma_queue_list, queue, next);
 			rte_free(queue);
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 7017124414..3ebf62e697 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -434,7 +434,7 @@ struct rte_bbdev_callback;
 struct rte_intr_handle;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
+RTE_TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
 
 /**
  * @internal The data structure associated with a device. Drivers can access
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 11f4e6fdbf..f86bf2260b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -879,7 +879,7 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 struct rte_cryptodev_callback;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+RTE_TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
 
 /**
  * Structure used to hold information about the callbacks to be called for a
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/rte_cryptodev_pmd.h
index 1274436870..9542cbf263 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/rte_cryptodev_pmd.h
@@ -66,7 +66,7 @@ struct rte_cryptodev_global {
 
 /* Cryptodev driver, containing the driver ID */
 struct cryptodev_driver {
-	TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
 	const struct rte_driver *driver;
 	uint8_t id;
 };
diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c
index 23aaf8b7e4..7edc6798fe 100644
--- a/lib/eal/common/eal_common_devargs.c
+++ b/lib/eal/common/eal_common_devargs.c
@@ -9,6 +9,7 @@
 #include <stdio.h>
 #include <string.h>
 #include <stdarg.h>
+#include <sys/queue.h>
 
 #include <rte_bus.h>
 #include <rte_class.h>
@@ -18,6 +19,7 @@
 #include <rte_errno.h>
 #include <rte_kvargs.h>
 #include <rte_log.h>
+#include <rte_os.h>
 #include <rte_tailq.h>
 #include "eal_private.h"
 
@@ -291,7 +293,7 @@ rte_devargs_insert(struct rte_devargs **da)
 	if (*da == NULL || (*da)->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
 		if (listed_da == *da)
 			/* devargs already in the list */
 			return 0;
@@ -358,7 +360,7 @@ rte_devargs_remove(struct rte_devargs *devargs)
 	if (devargs == NULL || devargs->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
 		if (strcmp(d->bus->name, devargs->bus->name) == 0 &&
 		    strcmp(d->name, devargs->name) == 0) {
 			TAILQ_REMOVE(&devargs_list, d, next);
diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c
index 3a28a53247..75168ca552 100644
--- a/lib/eal/common/eal_common_fbarray.c
+++ b/lib/eal/common/eal_common_fbarray.c
@@ -9,6 +9,7 @@
 #include <errno.h>
 #include <string.h>
 #include <unistd.h>
+#include <sys/queue.h>
 
 #include <rte_common.h>
 #include <rte_eal_paging.h>
diff --git a/lib/eal/common/eal_common_log.c b/lib/eal/common/eal_common_log.c
index ec8fe23a7f..1be35f5397 100644
--- a/lib/eal/common/eal_common_log.c
+++ b/lib/eal/common/eal_common_log.c
@@ -10,6 +10,7 @@
 #include <errno.h>
 #include <regex.h>
 #include <fnmatch.h>
+#include <sys/queue.h>
 
 #include <rte_eal.h>
 #include <rte_log.h>
diff --git a/lib/eal/common/eal_common_memalloc.c b/lib/eal/common/eal_common_memalloc.c
index e872c6533b..aefdf8de3f 100644
--- a/lib/eal/common/eal_common_memalloc.c
+++ b/lib/eal/common/eal_common_memalloc.c
@@ -3,6 +3,7 @@
  */
 
 #include <string.h>
+#include <sys/queue.h>
 
 #include <rte_errno.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index ff5861b5f3..2cc74b4472 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -6,6 +6,7 @@
 #include <stdlib.h>
 #include <unistd.h>
 #include <string.h>
+#include <sys/queue.h>
 #ifndef RTE_EXEC_ENV_WINDOWS
 #include <syslog.h>
 #endif
@@ -283,7 +284,7 @@ eal_option_device_parse(void)
 	void *tmp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
 		if (ret == 0) {
 			ret = rte_devargs_add(devopt->type, devopt->arg);
 			if (ret)
diff --git a/lib/eal/common/eal_trace.h b/lib/eal/common/eal_trace.h
index 06751eb23a..76fbcd86b0 100644
--- a/lib/eal/common/eal_trace.h
+++ b/lib/eal/common/eal_trace.h
@@ -5,6 +5,8 @@
 #ifndef __EAL_TRACE_H
 #define __EAL_TRACE_H
 
+#include <sys/queue.h>
+
 #include <rte_cycles.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
diff --git a/lib/eal/freebsd/include/rte_os.h b/lib/eal/freebsd/include/rte_os.h
index 627f0483ab..099ad3f019 100644
--- a/lib/eal/freebsd/include/rte_os.h
+++ b/lib/eal/freebsd/include/rte_os.h
@@ -11,6 +11,21 @@
  */
 
 #include <pthread_np.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define	RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
+
 
 typedef cpuset_t rte_cpuset_t;
 #define RTE_HAS_CPUSET
diff --git a/lib/eal/include/rte_bus.h b/lib/eal/include/rte_bus.h
index 80b154fb98..84d364df3f 100644
--- a/lib/eal/include/rte_bus.h
+++ b/lib/eal/include/rte_bus.h
@@ -19,13 +19,12 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_log.h>
 #include <rte_dev.h>
 
 /** Double linked list of buses */
-TAILQ_HEAD(rte_bus_list, rte_bus);
+RTE_TAILQ_HEAD(rte_bus_list, rte_bus);
 
 
 /**
@@ -250,7 +249,7 @@ typedef enum rte_iova_mode (*rte_bus_get_iommu_class_t)(void);
  * A structure describing a generic bus.
  */
 struct rte_bus {
-	TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
+	RTE_TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
 	const char *name;            /**< Name of the bus */
 	rte_bus_scan_t scan;         /**< Scan for devices attached to bus */
 	rte_bus_probe_t probe;       /**< Probe devices on bus */
diff --git a/lib/eal/include/rte_class.h b/lib/eal/include/rte_class.h
index 856d09b22d..d560339652 100644
--- a/lib/eal/include/rte_class.h
+++ b/lib/eal/include/rte_class.h
@@ -22,18 +22,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
-
 #include <rte_dev.h>
 
 /** Double linked list of classes */
-TAILQ_HEAD(rte_class_list, rte_class);
+RTE_TAILQ_HEAD(rte_class_list, rte_class);
 
 /**
  * A structure describing a generic device class.
  */
 struct rte_class {
-	TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
+	RTE_TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
 	const char *name; /**< Name of the class */
 	rte_dev_iterate_t dev_iterate; /**< Device iterator. */
 };
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6dd72c11a1..f6efe0c94e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -18,7 +18,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_compat.h>
@@ -75,7 +74,7 @@ struct rte_mem_resource {
  * A structure describing a device driver.
  */
 struct rte_driver {
-	TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
 	const char *name;                   /**< Driver name. */
 	const char *alias;              /**< Driver alias. */
 };
@@ -90,7 +89,7 @@ struct rte_driver {
  * A structure describing a generic device.
  */
 struct rte_device {
-	TAILQ_ENTRY(rte_device) next; /**< Next device */
+	RTE_TAILQ_ENTRY(rte_device) next; /**< Next device */
 	const char *name;             /**< Device name */
 	const struct rte_driver *driver; /**< Driver assigned after probing */
 	const struct rte_bus *bus;    /**< Bus handle assigned on scan */
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index cd90944fe8..957477b398 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -21,7 +21,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 #include <rte_compat.h>
 #include <rte_bus.h>
 
@@ -76,7 +75,7 @@ enum rte_devtype {
  */
 struct rte_devargs {
 	/** Next in list. */
-	TAILQ_ENTRY(rte_devargs) next;
+	RTE_TAILQ_ENTRY(rte_devargs) next;
 	/** Type of device. */
 	enum rte_devtype type;
 	/** Device policy. */
diff --git a/lib/eal/include/rte_log.h b/lib/eal/include/rte_log.h
index b706bb8710..bb3523467b 100644
--- a/lib/eal/include/rte_log.h
+++ b/lib/eal/include/rte_log.h
@@ -21,7 +21,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdarg.h>
 #include <stdbool.h>
-#include <sys/queue.h>
 
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/eal/include/rte_service.h b/lib/eal/include/rte_service.h
index c7d037d862..1c9275c32a 100644
--- a/lib/eal/include/rte_service.h
+++ b/lib/eal/include/rte_service.h
@@ -29,7 +29,6 @@ extern "C" {
 
 #include<stdio.h>
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/include/rte_tailq.h b/lib/eal/include/rte_tailq.h
index b6fe4e5f78..28cd54ef3e 100644
--- a/lib/eal/include/rte_tailq.h
+++ b/lib/eal/include/rte_tailq.h
@@ -15,17 +15,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <stdio.h>
 #include <rte_debug.h>
 
 /** dummy structure type used by the rte_tailq APIs */
 struct rte_tailq_entry {
-	TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
+	RTE_TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
 	void *data; /**< Pointer to the data referenced by this tailq entry */
 };
 /** dummy */
-TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
+RTE_TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
 
 #define RTE_TAILQ_NAMESIZE 32
 
@@ -48,7 +47,7 @@ struct rte_tailq_elem {
 	 * rte_eal_tailqs_init()
 	 */
 	struct rte_tailq_head *head;
-	TAILQ_ENTRY(rte_tailq_elem) next;
+	RTE_TAILQ_ENTRY(rte_tailq_elem) next;
 	const char name[RTE_TAILQ_NAMESIZE];
 };
 
@@ -126,10 +125,10 @@ RTE_INIT(tailqinitfn_ ##t) \
 }
 
 /* This macro permits both remove and free var within the loop safely.*/
-#ifndef TAILQ_FOREACH_SAFE
-#define TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
-	for ((var) = TAILQ_FIRST((head));			\
-	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1);	\
+#ifndef RTE_TAILQ_FOREACH_SAFE
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
+	for ((var) = RTE_TAILQ_FIRST((head));			\
+	    (var) && ((tvar) = RTE_TAILQ_NEXT((var), field), 1);	\
 	    (var) = (tvar))
 #endif
 
diff --git a/lib/eal/linux/include/rte_os.h b/lib/eal/linux/include/rte_os.h
index 1618b4df22..1a6e5b789f 100644
--- a/lib/eal/linux/include/rte_os.h
+++ b/lib/eal/linux/include/rte_os.h
@@ -11,6 +11,21 @@
  */
 
 #include <sched.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define	RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
+
 
 #ifdef CPU_SETSIZE /* may require _GNU_SOURCE */
 typedef cpu_set_t rte_cpuset_t;
diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c
index e5dc54efb8..103c1f909d 100644
--- a/lib/eal/windows/eal_alarm.c
+++ b/lib/eal/windows/eal_alarm.c
@@ -4,6 +4,7 @@
 
 #include <stdatomic.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 
 #include <rte_alarm.h>
 #include <rte_spinlock.h>
diff --git a/lib/eal/windows/include/rte_os.h b/lib/eal/windows/include/rte_os.h
index 66c711d458..ee7a8c7a08 100644
--- a/lib/eal/windows/include/rte_os.h
+++ b/lib/eal/windows/include/rte_os.h
@@ -18,6 +18,37 @@
 extern "C" {
 #endif
 
+#define	RTE_TAILQ_HEAD(name, type) \
+struct name { \
+	struct type *tqh_first;	/* first element */ \
+	struct type **tqh_last;	/* addr of last next element */	\
+}
+#define	RTE_TAILQ_ENTRY(type) \
+struct { \
+	struct type *tqe_next;	/* next element */ \
+	struct type **tqe_prev;	/* address of previous next element */ \
+}
+#define	RTE_TAILQ_FOREACH(var, head, field) \
+	for ((var) = RTE_TAILQ_FIRST((head)); \
+	    (var); \
+	    (var) = RTE_TAILQ_NEXT((var), field))
+#define	RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define	RTE_TAILQ_FIRST(head)	((head)->tqh_first)
+#define	RTE_TAILQ_NEXT(elm, field) ((elm)->field.tqe_next)
+#define	RTE_STAILQ_HEAD(name, type) \
+struct name { \
+	struct type *stqh_first;/* first element */ \
+	struct type **stqh_last;/* addr of last next element */ \
+}
+#define	RTE_STAILQ_ENTRY(type) \
+struct { \
+	struct type *stqe_next;	/* next element */ \
+}
+
+
 /* cpu_set macros implementation */
 #define RTE_CPU_AND(dst, src1, src2) CPU_AND(dst, src1, src2)
 #define RTE_CPU_OR(dst, src1, src2) CPU_OR(dst, src1, src2)
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index 77f46809f8..5bf517fee9 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -759,7 +759,7 @@ rte_efd_free(struct rte_efd_table *table)
 	efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list);
 	rte_mcfg_tailq_write_lock();
 
-	TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
 		if (te->data == (void *) table) {
 			TAILQ_REMOVE(efd_list, te, next);
 			rte_free(te);
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..d2c9ec42c7 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -21,7 +21,7 @@
 
 struct rte_eth_dev_callback;
 /** @internal Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
+RTE_TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
 struct rte_eth_dev;
 
diff --git a/lib/hash/rte_fbk_hash.h b/lib/hash/rte_fbk_hash.h
index c4d6976d2b..9c3a61c1d6 100644
--- a/lib/hash/rte_fbk_hash.h
+++ b/lib/hash/rte_fbk_hash.h
@@ -17,7 +17,6 @@
 
 #include <stdint.h>
 #include <errno.h>
-#include <sys/queue.h>
 
 #ifdef __cplusplus
 extern "C" {
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index d5a95a6e00..696a1121e2 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2021 Intel Corporation
  */
 
+#include <sys/queue.h>
+
 #include <rte_thash.h>
 #include <rte_tailq.h>
 #include <rte_random.h>
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index 0bfe64b14e..80f931c32a 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -62,7 +62,7 @@ struct ip_frag_key {
  * First two entries in the frags[] array are for the last and first fragments.
  */
 struct ip_frag_pkt {
-	TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
+	RTE_TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
 	struct ip_frag_key key;           /**< fragmentation key */
 	uint64_t             start;       /**< creation timestamp */
 	uint32_t             total_size;  /**< expected reassembled size */
@@ -83,7 +83,7 @@ struct rte_ip_frag_death_row {
 	/**< mbufs to be freed */
 };
 
-TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
+RTE_TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
 
 /** fragmentation table statistics */
 struct ip_frag_tbl_stat {
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 59a588425b..c5f859ae71 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1337,7 +1337,7 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 
 	rte_mcfg_mempool_read_lock();
 
-	TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
+	RTE_TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
 		(*func)((struct rte_mempool *) te->data, arg);
 	}
 
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 4235d6f0bf..f57ecbd6fc 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -38,7 +38,6 @@
 #include <stdint.h>
 #include <errno.h>
 #include <inttypes.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_spinlock.h>
@@ -141,7 +140,7 @@ struct rte_mempool_objsz {
  * double-frees.
  */
 struct rte_mempool_objhdr {
-	STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;          /**< The mempool owning the object. */
 	rte_iova_t iova;                 /**< IO address of the object. */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -152,7 +151,7 @@ struct rte_mempool_objhdr {
 /**
  * A list of object headers type
  */
-STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
+RTE_STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 
@@ -171,7 +170,7 @@ struct rte_mempool_objtlr {
 /**
  * A list of memory where objects are stored
  */
-STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
+RTE_STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
 
 /**
  * Callback used to free a memory chunk
@@ -186,7 +185,7 @@ typedef void (rte_mempool_memchunk_free_cb_t)(struct rte_mempool_memhdr *memhdr,
  * and physically contiguous.
  */
 struct rte_mempool_memhdr {
-	STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;  /**< The mempool owning the chunk */
 	void *addr;              /**< Virtual address of the chunk */
 	rte_iova_t iova;         /**< IO address of the chunk */
diff --git a/lib/pci/rte_pci.h b/lib/pci/rte_pci.h
index 1f33d687f4..71cbd441c7 100644
--- a/lib/pci/rte_pci.h
+++ b/lib/pci/rte_pci.h
@@ -18,7 +18,6 @@ extern "C" {
 
 #include <stdio.h>
 #include <limits.h>
-#include <sys/queue.h>
 #include <inttypes.h>
 #include <sys/types.h>
 
diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 16718ca7f1..43ce1a29d4 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -26,7 +26,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdint.h>
 #include <string.h>
-#include <sys/queue.h>
 #include <errno.h>
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/table/rte_swx_table.h b/lib/table/rte_swx_table.h
index e23f2304c6..f93e5f3f95 100644
--- a/lib/table/rte_swx_table.h
+++ b/lib/table/rte_swx_table.h
@@ -16,7 +16,8 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
+
+#include <rte_os.h>
 
 /** Match type. */
 enum rte_swx_table_match_type {
@@ -68,7 +69,7 @@ struct rte_swx_table_entry {
 	/** Used to facilitate the membership of this table entry to a
 	 * linked list.
 	 */
-	TAILQ_ENTRY(rte_swx_table_entry) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_entry) node;
 
 	/** Key value for the current entry. Array of *key_size* bytes or NULL
 	 * if the *key_size* for the current table is 0.
@@ -111,7 +112,7 @@ struct rte_swx_table_entry {
 };
 
 /** List of table entries. */
-TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
+RTE_TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
 
 /**
  * Table memory footprint get
diff --git a/lib/table/rte_swx_table_selector.h b/lib/table/rte_swx_table_selector.h
index 71b6a74810..62988d2856 100644
--- a/lib/table/rte_swx_table_selector.h
+++ b/lib/table/rte_swx_table_selector.h
@@ -16,7 +16,6 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_compat.h>
 
@@ -56,7 +55,7 @@ struct rte_swx_table_selector_params {
 /** Group member parameters. */
 struct rte_swx_table_selector_member {
 	/** Linked list connectivity. */
-	TAILQ_ENTRY(rte_swx_table_selector_member) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_selector_member) node;
 
 	/** Member ID. */
 	uint32_t member_id;
@@ -66,7 +65,7 @@ struct rte_swx_table_selector_member {
 };
 
 /** List of group members. */
-TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
+RTE_TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
 
 /** Group parameters. */
 struct rte_swx_table_selector_group {
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e0b67721b6..e4a445e709 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -32,7 +32,7 @@ vhost_user_iotlb_pending_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_pending_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -100,7 +100,8 @@ vhost_user_iotlb_pending_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next,
+				temp_node) {
 		if (node->iova < iova)
 			continue;
 		if (node->iova >= iova + size)
@@ -121,7 +122,7 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -141,7 +142,7 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
 
 	entry_idx = rte_rand() % vq->iotlb_cache_nr;
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		if (!entry_idx) {
 			TAILQ_REMOVE(&vq->iotlb_list, node, next);
 			rte_mempool_put(vq->iotlb_pool, node);
@@ -218,7 +219,7 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		/* Sorted list */
 		if (unlikely(iova + size < node->iova))
 			break;
diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/rte_vdpa_dev.h
index bfada387b0..b0f494815f 100644
--- a/lib/vhost/rte_vdpa_dev.h
+++ b/lib/vhost/rte_vdpa_dev.h
@@ -71,7 +71,7 @@ struct rte_vdpa_dev_ops {
  * vdpa device structure includes device address and device operations.
  */
 struct rte_vdpa_device {
-	TAILQ_ENTRY(rte_vdpa_device) next;
+	RTE_TAILQ_ENTRY(rte_vdpa_device) next;
 	/** Generic device information */
 	struct rte_device *device;
 	/** vdpa device operations */
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index 99a926a772..6dd91859ac 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -115,7 +115,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
 	int ret = -1;
 
 	rte_spinlock_lock(&vdpa_device_list_lock);
-	TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
+	RTE_TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
 		if (dev != cur_dev)
 			continue;
 
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v1 1/6] bbdev: add capability for CRC16 check
  @ 2021-08-13 16:51  4% ` Nicolas Chautru
  0 siblings, 0 replies; 200+ results
From: Nicolas Chautru @ 2021-08-13 16:51 UTC (permalink / raw)
  To: dev, gakhil; +Cc: thomas, trix, hemant.agrawal, mingshan.zhang, Nicolas Chautru

Add a missing capability flag for the case when CRC16 is used for the
transport block (TB) CRC check.
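
For context, a hypothetical sketch (not part of this patch) of how an
application could use the new flag, assuming the usual bbdev pattern of
checking the reported capability before setting the operation flag:

  /* Hypothetical helper: enable CRC-16 checking on an LDPC decode op
   * only if the device advertises RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK.
   */
  static void
  enable_crc16_check(uint16_t dev_id, struct rte_bbdev_dec_op *op)
  {
          struct rte_bbdev_info info;
          const struct rte_bbdev_op_cap *cap;

          rte_bbdev_info_get(dev_id, &info);
          for (cap = info.drv.capabilities;
               cap->type != RTE_BBDEV_OP_NONE; cap++) {
                  if (cap->type == RTE_BBDEV_OP_LDPC_DEC &&
                      (cap->cap.ldpc_dec.capability_flags &
                       RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK))
                          op->ldpc_dec.op_flags |=
                                  RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK;
          }
  }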

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 app/test-bbdev/test_bbdev_vector.c     |  2 ++
 doc/guides/prog_guide/bbdev.rst        |  3 +++
 doc/guides/rel_notes/release_21_11.rst |  1 +
 lib/bbdev/rte_bbdev_op.h               | 34 ++++++++++++++++++----------------
 4 files changed, 24 insertions(+), 16 deletions(-)

diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
index 614dbd1..8d796b1 100644
--- a/app/test-bbdev/test_bbdev_vector.c
+++ b/app/test-bbdev/test_bbdev_vector.c
@@ -167,6 +167,8 @@
 		*op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK;
 	else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP"))
 		*op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP;
+	else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK"))
+		*op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK;
 	else if (!strcmp(token, "RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS"))
 		*op_flag_value = RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS;
 	else if (!strcmp(token, "RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE"))
diff --git a/doc/guides/prog_guide/bbdev.rst b/doc/guides/prog_guide/bbdev.rst
index 9619280..8bd7cba 100644
--- a/doc/guides/prog_guide/bbdev.rst
+++ b/doc/guides/prog_guide/bbdev.rst
@@ -891,6 +891,9 @@ given below.
 |RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP                                    |
 | Set to drop the last CRC bits decoding output                      |
 +--------------------------------------------------------------------+
+|RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK                                    |
+| Set for code block CRC-16 checking                                 |
++--------------------------------------------------------------------+
 |RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS                                 |
 | Set for bit-level de-interleaver bypass on input stream            |
 +--------------------------------------------------------------------+
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a55..69dd518 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -84,6 +84,7 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* bbdev: Added capability related to more comprehensive CRC options.
 
 ABI Changes
 -----------
diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h
index f946842..7c44ddd 100644
--- a/lib/bbdev/rte_bbdev_op.h
+++ b/lib/bbdev/rte_bbdev_op.h
@@ -142,51 +142,53 @@ enum rte_bbdev_op_ldpcdec_flag_bitmasks {
 	RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK = (1ULL << 1),
 	/** Set to drop the last CRC bits decoding output */
 	RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP = (1ULL << 2),
+	/** Set for transport block CRC-16 checking */
+	RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK = (1ULL << 3),
 	/** Set for bit-level de-interleaver bypass on Rx stream. */
-	RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 3),
+	RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 4),
 	/** Set for HARQ combined input stream enable. */
-	RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 4),
+	RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 5),
 	/** Set for HARQ combined output stream enable. */
-	RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 5),
+	RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 6),
 	/** Set for LDPC decoder bypass.
 	 *  RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE must be set.
 	 */
-	RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 6),
+	RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 7),
 	/** Set for soft-output stream enable */
-	RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 7),
+	RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 8),
 	/** Set for Rate-Matching bypass on soft-out stream. */
-	RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 8),
+	RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 9),
 	/** Set for bit-level de-interleaver bypass on soft-output stream. */
-	RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 9),
+	RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 10),
 	/** Set for iteration stopping on successful decode condition
 	 *  i.e. a successful syndrome check.
 	 */
-	RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 10),
+	RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 11),
 	/** Set if a device supports decoder dequeue interrupts. */
-	RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 11),
+	RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 12),
 	/** Set if a device supports scatter-gather functionality. */
-	RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 12),
+	RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 13),
 	/** Set if a device supports input/output HARQ compression. */
-	RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 13),
+	RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 14),
 	/** Set if a device supports input LLR compression. */
-	RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 14),
+	RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 15),
 	/** Set if a device supports HARQ input from
 	 *  device's internal memory.
 	 */
-	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 15),
+	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 16),
 	/** Set if a device supports HARQ output to
 	 *  device's internal memory.
 	 */
-	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 16),
+	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 17),
 	/** Set if a device supports loop-back access to
 	 *  HARQ internal memory. Intended for troubleshooting.
 	 */
-	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 17),
+	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 18),
 	/** Set if a device includes LLR filler bits in the circular buffer
 	 *  for HARQ memory. If not set, it is assumed the filler bits are not
 	 *  in HARQ memory and handled directly by the LDPC decoder.
 	 */
-	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 18)
+	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 19)
 };
 
 /** Flags for LDPC encoder operation and capability structure */
-- 
1.8.3.1


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCHv5] eal: remove sys/queue.h from public headers.
  2021-08-13  3:36  1%     ` [dpdk-dev] [PATCHv5] " William Tu
@ 2021-08-13 18:59  0%       ` Dmitry Kozlyuk
  2021-08-14  2:51  1%       ` [dpdk-dev] [PATCH v6] " William Tu
  1 sibling, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-08-13 18:59 UTC (permalink / raw)
  To: William Tu; +Cc: dev, nick.connolly, stephen

2021-08-13 03:36 (UTC+0000), William Tu:
> Currently there are some public headers that include 'sys/queue.h', which
> is not POSIX, but usually provided by the Linux/BSD system library.
> (Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
> The file is missing on Windows. During the windows build, DPDK uses a

Typo: "Windows".

> bundled copy, so building a DPDK library works fine.  But when OVS or other
> applications use DPDK as a library, because some DPDK public headers
> include 'sys/queue.h', on Windows, it triggers an error due to no such file.
> 
> One solution is to install the 'lib/eal/windows/include/sys/queue.h' into
> Windows environment, such as [1]. However, this means DPDK exports the
> functionalities of 'sys/queue.h' into the environment, which might cause
> symbols, macros, headers clashing with other applications.
> 
> The patch fixes it by removing the "#include <sys/queue.h>" from
> DPDK public headers, so programs including DPDK headers don't depend
> on the system to provide 'sys/queue.h'. When these public headers use
> macros such as TAILQ_xxx, we replace it with RTE_ prefix.

"replace it by _the ones_ with RTE_ prefix"?

> For Windows, we copy the definitions from <sys/queue.h> to rte_os.h
> under windows. Note that these RTE_ macros are compatible with

"under windows" -> "in Windows EAL"

> <sys/queue.h>, only at the level of API (to use with <sys/queue.h>

"only" -> "both"

> macros in C files) and ABI (to avoid breaking it).
> 
> Additionally, the TAILQ_FOREACH_SAFE is not part of <sys/queue.h>,
> the patch replaces it with RTE_TAILQ_FOREACH_SAFE.

> With this patch, all the public headers no longer have
> "#include <sys/queue.h>" or "TAILQ_xxx" macros.

This is a repetition of what is stated in the previous paragraph.

> 
> [1] http://mails.dpdk.org/archives/dev/2021-August/216304.html
> 
> Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
> Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
> Signed-off-by: William Tu <u9012063@gmail.com>
> ---
> v4-v5
> * fix compile error due to drivers/net/ipn3ke/ipn3ke_flow.c:1234
> * run spell check

1. Please register at http://patchwork.dpdk.org with the email used for the
patches and update the state of all previous versions to "Superseded".
It is not currently done automatically and only you and a few maintainers
can change the state.

Patchwork also shows CI build failures with v5, they need to be fixed.

2. Are you using `git format-patch -v5 ...` to create patches?
The subject of your patches is missing a space ("PATCH v5" vs "PATCHv5").
Not sure if tools like patchwork will properly process it.
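
For example, letting git add the version keeps the expected form (output
directory and base branch here are only examples):

    git format-patch -v6 -o v6/ origin/main

which yields "[PATCH v6] ..." subjects with the space.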

[...]
>  struct rte_afu_driver {
> -	TAILQ_ENTRY(rte_afu_driver) next;       /**< Next afu driver. */
> +	RTE_TAILQ_ENTRY(rte_afu_driver) next;       /**< Next afu driver. */
>  	struct rte_driver driver;               /**< Inherit core driver. */
>  	afu_probe_t *probe;                     /**< Device Probe function. */
>  	afu_remove_t *remove;                   /**< Device Remove function. */

Re: loss of comment alignment here and in other places.
Firstly, it's definitely not a big deal. Current patch is good because it only
changes relevant lines. Re-aligning all the comments would be worse IMO.
However, in cases like this, when keeping alignment doesn't require changing
neighboring lines, it could be kept. Just a nit.

[...]
>  /* This macro permits both remove and free var within the loop safely.*/
> -#ifndef TAILQ_FOREACH_SAFE
> -#define TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
> -	for ((var) = TAILQ_FIRST((head));			\
> -	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1);	\
> +#ifndef RTE_TAILQ_FOREACH_SAFE
> +#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
> +	for ((var) = RTE_TAILQ_FIRST((head));			\
> +	    (var) && ((tvar) = RTE_TAILQ_NEXT((var), field), 1);	\
>  	    (var) = (tvar))
>  #endif

Why duplicate this in rte_os.h (documentation lost, BTW) and add #ifdef?
RTE_TAILQ_FOREACH_SAFE is not needed in headers, it can be left here.
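
For context, the pattern this macro exists for (removing and freeing the
current element while iterating), with made-up names:

	struct entry *e, *tmp;

	RTE_TAILQ_FOREACH_SAFE(e, &head, next, tmp) {
		TAILQ_REMOVE(&head, e, next);
		free(e);
	}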

>  
> diff --git a/lib/eal/linux/include/rte_os.h b/lib/eal/linux/include/rte_os.h
> index 1618b4df22..1a6e5b789f 100644
> --- a/lib/eal/linux/include/rte_os.h
> +++ b/lib/eal/linux/include/rte_os.h
> @@ -11,6 +11,21 @@
>   */
>  
>  #include <sched.h>
> +#include <sys/queue.h>
> +
> +/* These macros are compatible with system's sys/queue.h. */
> +#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
> +#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
> +#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
> +#define	RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \

Stray TAB here and in rte_os.h for other platforms.

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v6] eal: remove sys/queue.h from public headers.
  2021-08-13  3:36  1%     ` [dpdk-dev] [PATCHv5] " William Tu
  2021-08-13 18:59  0%       ` Dmitry Kozlyuk
@ 2021-08-14  2:51  1%       ` William Tu
  2021-08-18 23:26  1%         ` [dpdk-dev] [PATCH v7] " William Tu
  1 sibling, 1 reply; 200+ results
From: William Tu @ 2021-08-14  2:51 UTC (permalink / raw)
  To: dev; +Cc: Dmitry.Kozliuk, nick.connolly

Currently there are some public headers that include 'sys/queue.h', which
is not POSIX, but usually provided by the Linux/BSD system library.
(Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
The file is missing on Windows. During the Windows build, DPDK uses a
bundled copy, so building a DPDK library works fine. But when OVS or other
applications use DPDK as a library on Windows, the build fails because some
DPDK public headers include 'sys/queue.h' and no such file exists there.

One solution is to install the 'lib/eal/windows/include/sys/queue.h' into
Windows environment, such as [1]. However, this means DPDK exports the
functionalities of 'sys/queue.h' into the environment, which might cause
symbols, macros, headers clashing with other applications.

The patch fixes it by removing the "#include <sys/queue.h>" from
DPDK public headers, so programs including DPDK headers don't depend
on the system to provide 'sys/queue.h'. When these public headers use
macros such as TAILQ_xxx, we replace it by the ones with RTE_ prefix.
For Windows, we copy the definitions from <sys/queue.h> to rte_os.h
in Windows EAL. Note that these RTE_ macros are compatible with
<sys/queue.h>, both at the level of API (to use with <sys/queue.h>
macros in C files) and ABI (to avoid breaking it).

Additionally, the TAILQ_FOREACH_SAFE is not part of <sys/queue.h>,
the patch replaces it with RTE_TAILQ_FOREACH_SAFE.
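
To illustrate the compatibility, a minimal sketch (file, struct and list
names are made up for this example):

/* my_public_header.h: needs only the RTE_ macro, no <sys/queue.h>. */
#include <rte_os.h>

struct my_obj {
	RTE_TAILQ_ENTRY(my_obj) next; /* same layout as TAILQ_ENTRY(my_obj) */
	int id;
};

/* app.c: on Linux/FreeBSD the application can keep using the plain
 * <sys/queue.h> macros on the same structure, because the RTE_ variants
 * expand to an identical layout.
 */
#include <stdlib.h>
#include <sys/queue.h>
#include "my_public_header.h"

TAILQ_HEAD(my_obj_list, my_obj);

static void
drain(struct my_obj_list *list)
{
	struct my_obj *o;

	while ((o = TAILQ_FIRST(list)) != NULL) {
		TAILQ_REMOVE(list, o, next);
		free(o);
	}
}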

[1] http://mails.dpdk.org/archives/dev/2021-August/216304.html

Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
Signed-off-by: William Tu <u9012063@gmail.com>
---
v5-v6:
* fix tab/indent issues, fix typos and spelling
* fix duplicate RTE_TAILQ_FOREACH_SAFE
* fix build error due to drivers/net/mlx5/mlx5_flow_meter.c
---
 drivers/bus/auxiliary/private.h            |  1 +
 drivers/bus/auxiliary/rte_bus_auxiliary.h  |  5 ++--
 drivers/bus/dpaa/dpaa_bus.c                |  4 +--
 drivers/bus/fslmc/fslmc_bus.c              |  4 +--
 drivers/bus/fslmc/fslmc_vfio.c             |  9 ++++---
 drivers/bus/ifpga/rte_bus_ifpga.h          |  8 +++---
 drivers/bus/pci/pci_params.c               |  2 ++
 drivers/bus/pci/rte_bus_pci.h              | 13 +++++----
 drivers/bus/pci/windows/pci.c              |  3 +++
 drivers/bus/pci/windows/pci_netuio.c       |  2 ++
 drivers/bus/vdev/rte_bus_vdev.h            |  7 +++--
 drivers/bus/vdev/vdev.c                    |  3 ++-
 drivers/bus/vmbus/rte_bus_vmbus.h          | 13 +++++----
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c         |  2 +-
 drivers/net/bonding/rte_eth_bond_flow.c    |  2 +-
 drivers/net/failsafe/failsafe_flow.c       |  2 +-
 drivers/net/i40e/i40e_ethdev.c             |  9 ++++---
 drivers/net/i40e/i40e_ethdev.h             |  1 +
 drivers/net/i40e/i40e_flow.c               |  6 ++---
 drivers/net/i40e/i40e_hash.c               |  2 +-
 drivers/net/i40e/rte_pmd_i40e.c            |  6 ++---
 drivers/net/iavf/iavf_generic_flow.c       | 14 +++++-----
 drivers/net/ice/ice_dcf_ethdev.c           |  1 +
 drivers/net/ice/ice_ethdev.c               |  4 +--
 drivers/net/ice/ice_generic_flow.c         | 14 +++++-----
 drivers/net/ipn3ke/ipn3ke_flow.c           |  2 +-
 drivers/net/mlx5/mlx5_flow_dv.c            |  2 +-
 drivers/net/mlx5/mlx5_flow_meter.c         |  2 +-
 drivers/net/softnic/rte_eth_softnic_flow.c |  3 ++-
 drivers/net/softnic/rte_eth_softnic_swq.c  |  2 +-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c        |  2 +-
 lib/bbdev/rte_bbdev.h                      |  2 +-
 lib/cryptodev/rte_cryptodev.h              |  2 +-
 lib/cryptodev/rte_cryptodev_pmd.h          |  2 +-
 lib/eal/common/eal_common_devargs.c        |  6 +++--
 lib/eal/common/eal_common_fbarray.c        |  1 +
 lib/eal/common/eal_common_log.c            |  1 +
 lib/eal/common/eal_common_memalloc.c       |  1 +
 lib/eal/common/eal_common_options.c        |  3 ++-
 lib/eal/common/eal_trace.h                 |  2 ++
 lib/eal/freebsd/include/rte_os.h           | 15 +++++++++++
 lib/eal/include/rte_bus.h                  |  5 ++--
 lib/eal/include/rte_class.h                |  6 ++---
 lib/eal/include/rte_dev.h                  |  5 ++--
 lib/eal/include/rte_devargs.h              |  3 +--
 lib/eal/include/rte_log.h                  |  1 -
 lib/eal/include/rte_service.h              |  1 -
 lib/eal/include/rte_tailq.h                | 15 +++--------
 lib/eal/linux/include/rte_os.h             | 15 +++++++++++
 lib/eal/windows/eal_alarm.c                |  1 +
 lib/eal/windows/include/rte_os.h           | 31 ++++++++++++++++++++++
 lib/efd/rte_efd.c                          |  2 +-
 lib/ethdev/rte_ethdev_core.h               |  2 +-
 lib/hash/rte_fbk_hash.h                    |  1 -
 lib/hash/rte_thash.c                       |  2 ++
 lib/ip_frag/rte_ip_frag.h                  |  4 +--
 lib/mempool/rte_mempool.c                  |  2 +-
 lib/mempool/rte_mempool.h                  |  9 +++----
 lib/pci/rte_pci.h                          |  1 -
 lib/ring/rte_ring_core.h                   |  1 -
 lib/table/rte_swx_table.h                  |  7 ++---
 lib/table/rte_swx_table_selector.h         |  5 ++--
 lib/vhost/iotlb.c                          | 11 ++++----
 lib/vhost/rte_vdpa_dev.h                   |  2 +-
 lib/vhost/vdpa.c                           |  2 +-
 65 files changed, 192 insertions(+), 127 deletions(-)

diff --git a/drivers/bus/auxiliary/private.h b/drivers/bus/auxiliary/private.h
index 9987e8b501..d22e83cf7a 100644
--- a/drivers/bus/auxiliary/private.h
+++ b/drivers/bus/auxiliary/private.h
@@ -7,6 +7,7 @@
 
 #include <stdbool.h>
 #include <stdio.h>
+#include <sys/queue.h>
 
 #include "rte_bus_auxiliary.h"
 
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index 2462bad2ba..b1f5610404 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -113,7 +112,7 @@ typedef int (rte_auxiliary_dma_unmap_t)(struct rte_auxiliary_device *dev,
  * A structure describing an auxiliary device.
  */
 struct rte_auxiliary_device {
-	TAILQ_ENTRY(rte_auxiliary_device) next;   /**< Next probed device. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
 	struct rte_device device;                 /**< Inherit core device */
 	char name[RTE_DEV_NAME_MAX_LEN + 1];      /**< ASCII device name */
 	struct rte_intr_handle intr_handle;       /**< Interrupt handle */
@@ -124,7 +123,7 @@ struct rte_auxiliary_device {
  * A structure describing an auxiliary driver.
  */
 struct rte_auxiliary_driver {
-	TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
 	struct rte_driver driver;             /**< Inherit core driver. */
 	struct rte_auxiliary_bus *bus;        /**< Auxiliary bus reference. */
 	rte_auxiliary_match_t *match;         /**< Device match function. */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index e499305d85..6cab2ae760 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -105,7 +105,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		comp = compare_dpaa_devices(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
@@ -245,7 +245,7 @@ dpaa_clean_device_list(void)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index becc455f6b..8c8f8a298d 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -45,7 +45,7 @@ cleanup_fslmc_device_list(void)
 	struct rte_dpaa2_device *dev;
 	struct rte_dpaa2_device *t_dev;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
 		TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
@@ -82,7 +82,7 @@ insert_in_device_list(struct rte_dpaa2_device *newdev)
 	struct rte_dpaa2_device *dev = NULL;
 	struct rte_dpaa2_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
 		comp = compare_dpaa2_devname(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c8373e627a..852fcfc4dd 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -808,7 +808,8 @@ fslmc_vfio_process_group(void)
 	bool is_dpmcp_in_blocklist = false, is_dpio_in_blocklist = false;
 	int dpmcp_count = 0, dpio_count = 0, current_device;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			dpmcp_count++;
 			if (dev->device.devargs &&
@@ -825,7 +826,8 @@ fslmc_vfio_process_group(void)
 
 	/* Search the MCP as that should be initialized first. */
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			current_device++;
 			if (dev->device.devargs &&
@@ -872,7 +874,8 @@ fslmc_vfio_process_group(void)
 	}
 
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_IO)
 			current_device++;
 		if (dev->device.devargs &&
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index b43084155a..a85e90d384 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -28,9 +28,9 @@ struct rte_afu_device;
 struct rte_afu_driver;
 
 /** Double linked list of Intel FPGA AFU device. */
-TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
+RTE_TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
 /** Double linked list of Intel FPGA AFU device drivers. */
-TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
+RTE_TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
 
 #define IFPGA_BUS_BITSTREAM_PATH_MAX_LEN 256
 
@@ -71,7 +71,7 @@ struct rte_afu_shared {
  * A structure describing a AFU device.
  */
 struct rte_afu_device {
-	TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
+	RTE_TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
 	struct rte_device device;               /**< Inherit core device */
 	struct rte_rawdev *rawdev;    /**< Point Rawdev */
 	struct rte_afu_id id;                   /**< AFU id within FPGA. */
@@ -105,7 +105,7 @@ typedef int (afu_remove_t)(struct rte_afu_device *);
  * A structure describing a AFU device.
  */
 struct rte_afu_driver {
-	TAILQ_ENTRY(rte_afu_driver) next;       /**< Next afu driver. */
+	RTE_TAILQ_ENTRY(rte_afu_driver) next;   /**< Next afu driver. */
 	struct rte_driver driver;               /**< Inherit core driver. */
 	afu_probe_t *probe;                     /**< Device Probe function. */
 	afu_remove_t *remove;                   /**< Device Remove function. */
diff --git a/drivers/bus/pci/pci_params.c b/drivers/bus/pci/pci_params.c
index 3192e9c967..717388753d 100644
--- a/drivers/bus/pci/pci_params.c
+++ b/drivers/bus/pci/pci_params.c
@@ -2,6 +2,8 @@
  * Copyright 2018 Gaëtan Rivet
  */
 
+#include <sys/queue.h>
+
 #include <rte_bus.h>
 #include <rte_bus_pci.h>
 #include <rte_dev.h>
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 583470e831..673a2850c1 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -37,16 +36,16 @@ struct rte_pci_device;
 struct rte_pci_driver;
 
 /** List of PCI devices */
-TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
+RTE_TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
 /** List of PCI drivers */
-TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
+RTE_TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
 
 /* PCI Bus iterators */
 #define FOREACH_DEVICE_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
 
 struct rte_devargs;
 
@@ -64,7 +63,7 @@ enum rte_pci_kernel_driver {
  * A structure describing a PCI device.
  */
 struct rte_pci_device {
-	TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
+	RTE_TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
 	struct rte_device device;           /**< Inherit core device */
 	struct rte_pci_addr addr;           /**< PCI location. */
 	struct rte_pci_id id;               /**< PCI ID. */
@@ -160,7 +159,7 @@ typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
  * A structure describing a PCI driver.
  */
 struct rte_pci_driver {
-	TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
 	struct rte_driver driver;          /**< Inherit core driver. */
 	struct rte_pci_bus *bus;           /**< PCI bus reference. */
 	rte_pci_probe_t *probe;            /**< Device probe function. */
diff --git a/drivers/bus/pci/windows/pci.c b/drivers/bus/pci/windows/pci.c
index d39a7748b8..d7bd5d6e80 100644
--- a/drivers/bus/pci/windows/pci.c
+++ b/drivers/bus/pci/windows/pci.c
@@ -1,6 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright 2020 Mellanox Technologies, Ltd
  */
+
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/pci/windows/pci_netuio.c b/drivers/bus/pci/windows/pci_netuio.c
index 1bf9133f71..a0b175a8fc 100644
--- a/drivers/bus/pci/windows/pci_netuio.c
+++ b/drivers/bus/pci/windows/pci_netuio.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2020 Intel Corporation.
  */
 
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
index fc315d10fa..2856799953 100644
--- a/drivers/bus/vdev/rte_bus_vdev.h
+++ b/drivers/bus/vdev/rte_bus_vdev.h
@@ -15,12 +15,11 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <rte_dev.h>
 #include <rte_devargs.h>
 
 struct rte_vdev_device {
-	TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
+	RTE_TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
 	struct rte_device device;               /**< Inherit core device */
 };
 
@@ -53,7 +52,7 @@ rte_vdev_device_args(const struct rte_vdev_device *dev)
 }
 
 /** Double linked list of virtual device drivers. */
-TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
+RTE_TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
 
 /**
  * Probe function called for each virtual device driver once.
@@ -107,7 +106,7 @@ typedef int (rte_vdev_dma_unmap_t)(struct rte_vdev_device *dev, void *addr,
  * A virtual device driver abstraction.
  */
 struct rte_vdev_driver {
-	TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
 	struct rte_driver driver;        /**< Inherited general driver. */
 	rte_vdev_probe_t *probe;         /**< Virtual device probe function. */
 	rte_vdev_remove_t *remove;       /**< Virtual device remove function. */
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index 281a2c34e8..a8d8b2327e 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -100,7 +100,8 @@ rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 	struct vdev_custom_scan *custom_scan, *tmp_scan;
 
 	rte_spinlock_lock(&vdev_custom_scan_lock);
-	TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next, tmp_scan) {
+	RTE_TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next,
+				tmp_scan) {
 		if (custom_scan->callback != callback ||
 				(custom_scan->user_arg != (void *)-1 &&
 				custom_scan->user_arg != user_arg))
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 4cf73ce815..6bcff66468 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -20,7 +20,6 @@ extern "C" {
 #include <limits.h>
 #include <stdbool.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -38,15 +37,15 @@ struct rte_vmbus_bus;
 struct vmbus_channel;
 struct vmbus_mon_page;
 
-TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
-TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
+RTE_TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
+RTE_TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
 
 /* VMBus iterators */
 #define FOREACH_DEVICE_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
 
 /** Maximum number of VMBUS resources. */
 enum hv_uio_map {
@@ -62,7 +61,7 @@ enum hv_uio_map {
  * A structure describing a VMBUS device.
  */
 struct rte_vmbus_device {
-	TAILQ_ENTRY(rte_vmbus_device) next;    /**< Next probed VMBUS device */
+	RTE_TAILQ_ENTRY(rte_vmbus_device) next; /**< Next probed VMBUS device */
 	const struct rte_vmbus_driver *driver; /**< Associated driver */
 	struct rte_device device;              /**< Inherit core device */
 	rte_uuid_t device_id;		       /**< VMBUS device id */
@@ -93,7 +92,7 @@ typedef int (vmbus_remove_t)(struct rte_vmbus_device *);
  * A structure describing a VMBUS driver.
  */
 struct rte_vmbus_driver {
-	TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
 	struct rte_driver driver;
 	struct rte_vmbus_bus *bus;          /**< VM bus reference. */
 	vmbus_probe_t *probe;               /**< Device Probe function. */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index dbf85e4eda..ac86b70caf 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -2018,7 +2018,7 @@ bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx)
 	struct ulp_context_list_entry	*entry, *temp;
 
 	rte_spinlock_lock(&bnxt_ulp_ctxt_lock);
-	TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
 		if (entry->ulp_ctx == ulp_ctx) {
 			TAILQ_REMOVE(&ulp_cntx_list, entry, next);
 			rte_free(entry);
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 417f76bf60..65b77faae7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -157,7 +157,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 	/* Destroy all bond flows from its slaves instead of flushing them to
 	 * keep the LACP flow or any other external flows.
 	 */
-	TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
 		lret = bond_flow_destroy(dev, flow, err);
 		if (unlikely(lret != 0))
 			ret = lret;
diff --git a/drivers/net/failsafe/failsafe_flow.c b/drivers/net/failsafe/failsafe_flow.c
index 5e2b5f7c67..354f9fec20 100644
--- a/drivers/net/failsafe/failsafe_flow.c
+++ b/drivers/net/failsafe/failsafe_flow.c
@@ -180,7 +180,7 @@ fs_flow_flush(struct rte_eth_dev *dev,
 			return ret;
 		}
 	}
-	TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
 		TAILQ_REMOVE(&PRIV(dev)->flow_list, flow, next);
 		fs_flow_release(&flow);
 	}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..6590363556 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -5436,7 +5436,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* VSI has child to attach, release child first */
 	if (vsi->veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5444,7 +5444,8 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 	}
 
 	if (vsi->floating_veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head,
+			list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5452,7 +5453,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* Remove all macvlan filters of the VSI */
 	i40e_vsi_remove_all_macvlan_filter(vsi);
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		rte_free(f);
 
 	if (vsi->type != I40E_VSI_MAIN &&
@@ -6055,7 +6056,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
 	i = 0;
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		mac_filter[i] = f->mac_info;
 		ret = i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr);
 		if (ret) {
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60..374b73e4a7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -6,6 +6,7 @@
 #define _I40E_ETHDEV_H_
 
 #include <stdint.h>
+#include <sys/queue.h>
 
 #include <rte_time.h>
 #include <rte_kvargs.h>
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c..e41a84f1d7 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4917,7 +4917,7 @@ i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
 		}
 
 		/* Delete FDIR flows in flow list. */
-		TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 			if (flow->filter_type == RTE_ETH_FILTER_FDIR) {
 				TAILQ_REMOVE(&pf->flow_list, flow, node);
 			}
@@ -4972,7 +4972,7 @@ i40e_flow_flush_ethertype_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete ethertype flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_ETHERTYPE) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
@@ -5000,7 +5000,7 @@ i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete tunnel flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_TUNNEL) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfc..6579b1a00b 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -1366,7 +1366,7 @@ i40e_hash_filter_flush(struct i40e_pf *pf)
 {
 	struct rte_flow *flow, *next;
 
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
 		if (flow->filter_type != RTE_ETH_FILTER_HASH)
 			continue;
 
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index 2e34140c5b..ec24046440 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -216,7 +216,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* remove all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		vlan_num = vsi->vlan_num;
 		filter_type = f->mac_info.filter_type;
 		if (filter_type == I40E_MACVLAN_PERFECT_MATCH ||
@@ -274,7 +274,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* restore all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		if (f->mac_info.filter_type == I40E_MACVLAN_PERFECT_MATCH ||
 		    f->mac_info.filter_type == I40E_MACVLAN_HASH_MATCH) {
 			/**
@@ -563,7 +563,7 @@ rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id,
 	rte_ether_addr_copy(mac_addr, &vf->mac_addr);
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		if (i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr)
 				!= I40E_SUCCESS)
 			PMD_DRV_LOG(WARNING, "Delete MAC failed");
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 1fe270fb22..b86d99e57d 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -1637,7 +1637,7 @@ iavf_flow_init(struct iavf_adapter *ad)
 	TAILQ_INIT(&vf->dist_parser_list);
 	rte_spinlock_init(&vf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 				     engine->type);
@@ -1663,7 +1663,7 @@ iavf_flow_uninit(struct iavf_adapter *ad)
 	struct iavf_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1733,7 +1733,7 @@ iavf_unregister_parser(struct iavf_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -1917,7 +1917,7 @@ iavf_parse_engine_create(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -1946,7 +1946,7 @@ iavf_parse_engine_validate(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2089,7 +2089,7 @@ iavf_flow_is_valid(struct rte_flow *flow)
 	void *temp;
 
 	if (flow && flow->engine) {
-		TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 			if (engine == flow->engine)
 				return true;
 		}
@@ -2142,7 +2142,7 @@ iavf_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
 		ret = iavf_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da87..629e88980d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -4,6 +4,7 @@
 
 #include <errno.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 #include <sys/types.h>
 #include <unistd.h>
 
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954..fadd5f2e5a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1104,7 +1104,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (!vsi || !vsi->mac_num)
 		return -EINVAL;
 
-	TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
 		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
@@ -1115,7 +1115,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (vsi->vlan_num == 0)
 		return 0;
 
-	TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
 		ret = ice_remove_vlan_filter(vsi, &v_f->vlan_info.vlan);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 66b5743abf..3e557efe0c 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1820,7 +1820,7 @@ ice_flow_init(struct ice_adapter *ad)
 	TAILQ_INIT(&pf->dist_parser_list);
 	rte_spinlock_init(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 					engine->type);
@@ -1846,7 +1846,7 @@ ice_flow_uninit(struct ice_adapter *ad)
 	struct ice_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1946,7 +1946,7 @@ ice_unregister_parser(struct ice_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -2272,7 +2272,7 @@ ice_parse_engine_create(struct ice_adapter *ad,
 	void *meta = NULL;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		int ret;
 
 		if (parser_node->parser->parse_pattern_action(ad,
@@ -2305,7 +2305,7 @@ ice_parse_engine_validate(struct ice_adapter *ad,
 	struct ice_flow_parser_node *parser_node;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2477,7 +2477,7 @@ ice_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		ret = ice_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
@@ -2541,7 +2541,7 @@ ice_flow_redirect(struct ice_adapter *ad,
 
 	rte_spinlock_lock(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		if (!p_flow->engine->redirect)
 			continue;
 		ret = p_flow->engine->redirect(ad, p_flow, rd);
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index c702e19ea5..f5867ca055 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -1231,7 +1231,7 @@ ipn3ke_flow_flush(struct rte_eth_dev *dev,
 	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
 	struct rte_flow *flow, *temp;
 
-	TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
 		TAILQ_REMOVE(&hw->flow_list, flow, next);
 		rte_free(flow);
 	}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 31d857030f..ba2bf4de37 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -15099,7 +15099,7 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev,
 		    policy->act_cnt[i].fate_action == MLX5_FLOW_FATE_MTR)
 			next_fm = mlx5_flow_meter_find(priv,
 					policy->act_cnt[i].next_mtr_id, NULL);
-		TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
+		RTE_TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
 				   next_port, tmp) {
 			claim_zero(mlx5_flow_os_destroy_flow(color_rule->rule));
 			tbl = container_of(color_rule->matcher->tbl,
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index a24bd9c7ae..ba4e9fca17 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -2168,7 +2168,7 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
 			priv->mtr_idx_tbl = NULL;
 		}
 	} else {
-		TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
+		RTE_TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
 			fm = &legacy_fm->fm;
 			if (mlx5_flow_meter_params_flush(dev, fm, 0))
 				return -rte_mtr_error_set(error, EINVAL,
diff --git a/drivers/net/softnic/rte_eth_softnic_flow.c b/drivers/net/softnic/rte_eth_softnic_flow.c
index 27eaf380cd..7d054c38d2 100644
--- a/drivers/net/softnic/rte_eth_softnic_flow.c
+++ b/drivers/net/softnic/rte_eth_softnic_flow.c
@@ -2207,7 +2207,8 @@ pmd_flow_flush(struct rte_eth_dev *dev,
 			void *temp;
 			int status;
 
-			TAILQ_FOREACH_SAFE(flow, &table->flows, node, temp) {
+			RTE_TAILQ_FOREACH_SAFE(flow, &table->flows, node,
+				temp) {
 				/* Rule delete. */
 				status = softnic_pipeline_table_rule_delete
 						(softnic,
diff --git a/drivers/net/softnic/rte_eth_softnic_swq.c b/drivers/net/softnic/rte_eth_softnic_swq.c
index 2083d0a976..afe6f05e29 100644
--- a/drivers/net/softnic/rte_eth_softnic_swq.c
+++ b/drivers/net/softnic/rte_eth_softnic_swq.c
@@ -39,7 +39,7 @@ softnic_softnic_swq_free_keep_rxq_txq(struct pmd_internals *p)
 {
 	struct softnic_swq *swq, *tswq;
 
-	TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
+	RTE_TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
 		if ((strncmp(swq->name, "RXQ", strlen("RXQ")) == 0) ||
 			(strncmp(swq->name, "TXQ", strlen("TXQ")) == 0))
 			continue;
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c961e18d67..7b80370b36 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1606,7 +1606,7 @@ remove_hw_queues_from_list(struct dpaa2_dpdmai_dev *dpdmai_dev)
 
 	DPAA2_QDMA_FUNC_TRACE();
 
-	TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
+	RTE_TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
 		if (queue->dpdmai_dev == dpdmai_dev) {
 			TAILQ_REMOVE(&qdma_queue_list, queue, next);
 			rte_free(queue);
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 7017124414..3ebf62e697 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -434,7 +434,7 @@ struct rte_bbdev_callback;
 struct rte_intr_handle;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
+RTE_TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
 
 /**
  * @internal The data structure associated with a device. Drivers can access
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 11f4e6fdbf..f86bf2260b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -879,7 +879,7 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 struct rte_cryptodev_callback;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+RTE_TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
 
 /**
  * Structure used to hold information about the callbacks to be called for a
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/rte_cryptodev_pmd.h
index 1274436870..9542cbf263 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/rte_cryptodev_pmd.h
@@ -66,7 +66,7 @@ struct rte_cryptodev_global {
 
 /* Cryptodev driver, containing the driver ID */
 struct cryptodev_driver {
-	TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
 	const struct rte_driver *driver;
 	uint8_t id;
 };
diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c
index 23aaf8b7e4..7edc6798fe 100644
--- a/lib/eal/common/eal_common_devargs.c
+++ b/lib/eal/common/eal_common_devargs.c
@@ -9,6 +9,7 @@
 #include <stdio.h>
 #include <string.h>
 #include <stdarg.h>
+#include <sys/queue.h>
 
 #include <rte_bus.h>
 #include <rte_class.h>
@@ -18,6 +19,7 @@
 #include <rte_errno.h>
 #include <rte_kvargs.h>
 #include <rte_log.h>
+#include <rte_os.h>
 #include <rte_tailq.h>
 #include "eal_private.h"
 
@@ -291,7 +293,7 @@ rte_devargs_insert(struct rte_devargs **da)
 	if (*da == NULL || (*da)->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
 		if (listed_da == *da)
 			/* devargs already in the list */
 			return 0;
@@ -358,7 +360,7 @@ rte_devargs_remove(struct rte_devargs *devargs)
 	if (devargs == NULL || devargs->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
 		if (strcmp(d->bus->name, devargs->bus->name) == 0 &&
 		    strcmp(d->name, devargs->name) == 0) {
 			TAILQ_REMOVE(&devargs_list, d, next);
diff --git a/lib/eal/common/eal_common_fbarray.c b/lib/eal/common/eal_common_fbarray.c
index 3a28a53247..75168ca552 100644
--- a/lib/eal/common/eal_common_fbarray.c
+++ b/lib/eal/common/eal_common_fbarray.c
@@ -9,6 +9,7 @@
 #include <errno.h>
 #include <string.h>
 #include <unistd.h>
+#include <sys/queue.h>
 
 #include <rte_common.h>
 #include <rte_eal_paging.h>
diff --git a/lib/eal/common/eal_common_log.c b/lib/eal/common/eal_common_log.c
index ec8fe23a7f..1be35f5397 100644
--- a/lib/eal/common/eal_common_log.c
+++ b/lib/eal/common/eal_common_log.c
@@ -10,6 +10,7 @@
 #include <errno.h>
 #include <regex.h>
 #include <fnmatch.h>
+#include <sys/queue.h>
 
 #include <rte_eal.h>
 #include <rte_log.h>
diff --git a/lib/eal/common/eal_common_memalloc.c b/lib/eal/common/eal_common_memalloc.c
index e872c6533b..aefdf8de3f 100644
--- a/lib/eal/common/eal_common_memalloc.c
+++ b/lib/eal/common/eal_common_memalloc.c
@@ -3,6 +3,7 @@
  */
 
 #include <string.h>
+#include <sys/queue.h>
 
 #include <rte_errno.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index ff5861b5f3..2cc74b4472 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -6,6 +6,7 @@
 #include <stdlib.h>
 #include <unistd.h>
 #include <string.h>
+#include <sys/queue.h>
 #ifndef RTE_EXEC_ENV_WINDOWS
 #include <syslog.h>
 #endif
@@ -283,7 +284,7 @@ eal_option_device_parse(void)
 	void *tmp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
 		if (ret == 0) {
 			ret = rte_devargs_add(devopt->type, devopt->arg);
 			if (ret)
diff --git a/lib/eal/common/eal_trace.h b/lib/eal/common/eal_trace.h
index 06751eb23a..76fbcd86b0 100644
--- a/lib/eal/common/eal_trace.h
+++ b/lib/eal/common/eal_trace.h
@@ -5,6 +5,8 @@
 #ifndef __EAL_TRACE_H
 #define __EAL_TRACE_H
 
+#include <sys/queue.h>
+
 #include <rte_cycles.h>
 #include <rte_log.h>
 #include <rte_malloc.h>
diff --git a/lib/eal/freebsd/include/rte_os.h b/lib/eal/freebsd/include/rte_os.h
index 627f0483ab..06f30ce238 100644
--- a/lib/eal/freebsd/include/rte_os.h
+++ b/lib/eal/freebsd/include/rte_os.h
@@ -11,6 +11,21 @@
  */
 
 #include <pthread_np.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
+
 
 typedef cpuset_t rte_cpuset_t;
 #define RTE_HAS_CPUSET
diff --git a/lib/eal/include/rte_bus.h b/lib/eal/include/rte_bus.h
index 80b154fb98..84d364df3f 100644
--- a/lib/eal/include/rte_bus.h
+++ b/lib/eal/include/rte_bus.h
@@ -19,13 +19,12 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_log.h>
 #include <rte_dev.h>
 
 /** Double linked list of buses */
-TAILQ_HEAD(rte_bus_list, rte_bus);
+RTE_TAILQ_HEAD(rte_bus_list, rte_bus);
 
 
 /**
@@ -250,7 +249,7 @@ typedef enum rte_iova_mode (*rte_bus_get_iommu_class_t)(void);
  * A structure describing a generic bus.
  */
 struct rte_bus {
-	TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
+	RTE_TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
 	const char *name;            /**< Name of the bus */
 	rte_bus_scan_t scan;         /**< Scan for devices attached to bus */
 	rte_bus_probe_t probe;       /**< Probe devices on bus */
diff --git a/lib/eal/include/rte_class.h b/lib/eal/include/rte_class.h
index 856d09b22d..d560339652 100644
--- a/lib/eal/include/rte_class.h
+++ b/lib/eal/include/rte_class.h
@@ -22,18 +22,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
-
 #include <rte_dev.h>
 
 /** Double linked list of classes */
-TAILQ_HEAD(rte_class_list, rte_class);
+RTE_TAILQ_HEAD(rte_class_list, rte_class);
 
 /**
  * A structure describing a generic device class.
  */
 struct rte_class {
-	TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
+	RTE_TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
 	const char *name; /**< Name of the class */
 	rte_dev_iterate_t dev_iterate; /**< Device iterator. */
 };
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6dd72c11a1..f6efe0c94e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -18,7 +18,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_compat.h>
@@ -75,7 +74,7 @@ struct rte_mem_resource {
  * A structure describing a device driver.
  */
 struct rte_driver {
-	TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
 	const char *name;                   /**< Driver name. */
 	const char *alias;              /**< Driver alias. */
 };
@@ -90,7 +89,7 @@ struct rte_driver {
  * A structure describing a generic device.
  */
 struct rte_device {
-	TAILQ_ENTRY(rte_device) next; /**< Next device */
+	RTE_TAILQ_ENTRY(rte_device) next; /**< Next device */
 	const char *name;             /**< Device name */
 	const struct rte_driver *driver; /**< Driver assigned after probing */
 	const struct rte_bus *bus;    /**< Bus handle assigned on scan */
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index cd90944fe8..957477b398 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -21,7 +21,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 #include <rte_compat.h>
 #include <rte_bus.h>
 
@@ -76,7 +75,7 @@ enum rte_devtype {
  */
 struct rte_devargs {
 	/** Next in list. */
-	TAILQ_ENTRY(rte_devargs) next;
+	RTE_TAILQ_ENTRY(rte_devargs) next;
 	/** Type of device. */
 	enum rte_devtype type;
 	/** Device policy. */
diff --git a/lib/eal/include/rte_log.h b/lib/eal/include/rte_log.h
index b706bb8710..bb3523467b 100644
--- a/lib/eal/include/rte_log.h
+++ b/lib/eal/include/rte_log.h
@@ -21,7 +21,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdarg.h>
 #include <stdbool.h>
-#include <sys/queue.h>
 
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/eal/include/rte_service.h b/lib/eal/include/rte_service.h
index c7d037d862..1c9275c32a 100644
--- a/lib/eal/include/rte_service.h
+++ b/lib/eal/include/rte_service.h
@@ -29,7 +29,6 @@ extern "C" {
 
 #include<stdio.h>
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/include/rte_tailq.h b/lib/eal/include/rte_tailq.h
index b6fe4e5f78..b32033ad66 100644
--- a/lib/eal/include/rte_tailq.h
+++ b/lib/eal/include/rte_tailq.h
@@ -15,17 +15,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <stdio.h>
 #include <rte_debug.h>
 
 /** dummy structure type used by the rte_tailq APIs */
 struct rte_tailq_entry {
-	TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
+	RTE_TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
 	void *data; /**< Pointer to the data referenced by this tailq entry */
 };
 /** dummy */
-TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
+RTE_TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
 
 #define RTE_TAILQ_NAMESIZE 32
 
@@ -48,7 +47,7 @@ struct rte_tailq_elem {
 	 * rte_eal_tailqs_init()
 	 */
 	struct rte_tailq_head *head;
-	TAILQ_ENTRY(rte_tailq_elem) next;
+	RTE_TAILQ_ENTRY(rte_tailq_elem) next;
 	const char name[RTE_TAILQ_NAMESIZE];
 };
 
@@ -125,14 +124,6 @@ RTE_INIT(tailqinitfn_ ##t) \
 		rte_panic("Cannot initialize tailq: %s\n", t.name); \
 }
 
-/* This macro permits both remove and free var within the loop safely.*/
-#ifndef TAILQ_FOREACH_SAFE
-#define TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
-	for ((var) = TAILQ_FIRST((head));			\
-	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1);	\
-	    (var) = (tvar))
-#endif
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/eal/linux/include/rte_os.h b/lib/eal/linux/include/rte_os.h
index 1618b4df22..ce5b0aed52 100644
--- a/lib/eal/linux/include/rte_os.h
+++ b/lib/eal/linux/include/rte_os.h
@@ -11,6 +11,21 @@
  */
 
 #include <sched.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
+
 
 #ifdef CPU_SETSIZE /* may require _GNU_SOURCE */
 typedef cpu_set_t rte_cpuset_t;
diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c
index e5dc54efb8..103c1f909d 100644
--- a/lib/eal/windows/eal_alarm.c
+++ b/lib/eal/windows/eal_alarm.c
@@ -4,6 +4,7 @@
 
 #include <stdatomic.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 
 #include <rte_alarm.h>
 #include <rte_spinlock.h>
diff --git a/lib/eal/windows/include/rte_os.h b/lib/eal/windows/include/rte_os.h
index 66c711d458..54892ab89c 100644
--- a/lib/eal/windows/include/rte_os.h
+++ b/lib/eal/windows/include/rte_os.h
@@ -18,6 +18,37 @@
 extern "C" {
 #endif
 
+#define RTE_TAILQ_HEAD(name, type) \
+struct name { \
+	struct type *tqh_first; /* first element */ \
+	struct type **tqh_last; /* addr of last next element */ \
+}
+#define RTE_TAILQ_ENTRY(type) \
+struct { \
+	struct type *tqe_next; /* next element */ \
+	struct type **tqe_prev; /* address of previous next element */ \
+}
+#define RTE_TAILQ_FOREACH(var, head, field) \
+	for ((var) = RTE_TAILQ_FIRST((head)); \
+	    (var); \
+	    (var) = RTE_TAILQ_NEXT((var), field))
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define RTE_TAILQ_FIRST(head) ((head)->tqh_first)
+#define RTE_TAILQ_NEXT(elm, field) ((elm)->field.tqe_next)
+#define RTE_STAILQ_HEAD(name, type) \
+struct name { \
+	struct type *stqh_first;/* first element */ \
+	struct type **stqh_last;/* addr of last next element */ \
+}
+#define RTE_STAILQ_ENTRY(type) \
+struct { \
+	struct type *stqe_next; /* next element */ \
+}
+
+
 /* cpu_set macros implementation */
 #define RTE_CPU_AND(dst, src1, src2) CPU_AND(dst, src1, src2)
 #define RTE_CPU_OR(dst, src1, src2) CPU_OR(dst, src1, src2)
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index 77f46809f8..5bf517fee9 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -759,7 +759,7 @@ rte_efd_free(struct rte_efd_table *table)
 	efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list);
 	rte_mcfg_tailq_write_lock();
 
-	TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
 		if (te->data == (void *) table) {
 			TAILQ_REMOVE(efd_list, te, next);
 			rte_free(te);
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..d2c9ec42c7 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -21,7 +21,7 @@
 
 struct rte_eth_dev_callback;
 /** @internal Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
+RTE_TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
 struct rte_eth_dev;
 
diff --git a/lib/hash/rte_fbk_hash.h b/lib/hash/rte_fbk_hash.h
index c4d6976d2b..9c3a61c1d6 100644
--- a/lib/hash/rte_fbk_hash.h
+++ b/lib/hash/rte_fbk_hash.h
@@ -17,7 +17,6 @@
 
 #include <stdint.h>
 #include <errno.h>
-#include <sys/queue.h>
 
 #ifdef __cplusplus
 extern "C" {
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index d5a95a6e00..696a1121e2 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2021 Intel Corporation
  */
 
+#include <sys/queue.h>
+
 #include <rte_thash.h>
 #include <rte_tailq.h>
 #include <rte_random.h>
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index 0bfe64b14e..80f931c32a 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -62,7 +62,7 @@ struct ip_frag_key {
  * First two entries in the frags[] array are for the last and first fragments.
  */
 struct ip_frag_pkt {
-	TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
+	RTE_TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
 	struct ip_frag_key key;           /**< fragmentation key */
 	uint64_t             start;       /**< creation timestamp */
 	uint32_t             total_size;  /**< expected reassembled size */
@@ -83,7 +83,7 @@ struct rte_ip_frag_death_row {
 	/**< mbufs to be freed */
 };
 
-TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
+RTE_TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
 
 /** fragmentation table statistics */
 struct ip_frag_tbl_stat {
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 59a588425b..c5f859ae71 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1337,7 +1337,7 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 
 	rte_mcfg_mempool_read_lock();
 
-	TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
+	RTE_TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
 		(*func)((struct rte_mempool *) te->data, arg);
 	}
 
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 4235d6f0bf..f57ecbd6fc 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -38,7 +38,6 @@
 #include <stdint.h>
 #include <errno.h>
 #include <inttypes.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_spinlock.h>
@@ -141,7 +140,7 @@ struct rte_mempool_objsz {
  * double-frees.
  */
 struct rte_mempool_objhdr {
-	STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;          /**< The mempool owning the object. */
 	rte_iova_t iova;                 /**< IO address of the object. */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -152,7 +151,7 @@ struct rte_mempool_objhdr {
 /**
  * A list of object headers type
  */
-STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
+RTE_STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 
@@ -171,7 +170,7 @@ struct rte_mempool_objtlr {
 /**
  * A list of memory where objects are stored
  */
-STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
+RTE_STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
 
 /**
  * Callback used to free a memory chunk
@@ -186,7 +185,7 @@ typedef void (rte_mempool_memchunk_free_cb_t)(struct rte_mempool_memhdr *memhdr,
  * and physically contiguous.
  */
 struct rte_mempool_memhdr {
-	STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;  /**< The mempool owning the chunk */
 	void *addr;              /**< Virtual address of the chunk */
 	rte_iova_t iova;         /**< IO address of the chunk */
diff --git a/lib/pci/rte_pci.h b/lib/pci/rte_pci.h
index 1f33d687f4..71cbd441c7 100644
--- a/lib/pci/rte_pci.h
+++ b/lib/pci/rte_pci.h
@@ -18,7 +18,6 @@ extern "C" {
 
 #include <stdio.h>
 #include <limits.h>
-#include <sys/queue.h>
 #include <inttypes.h>
 #include <sys/types.h>
 
diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 16718ca7f1..43ce1a29d4 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -26,7 +26,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdint.h>
 #include <string.h>
-#include <sys/queue.h>
 #include <errno.h>
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/table/rte_swx_table.h b/lib/table/rte_swx_table.h
index e23f2304c6..f93e5f3f95 100644
--- a/lib/table/rte_swx_table.h
+++ b/lib/table/rte_swx_table.h
@@ -16,7 +16,8 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
+
+#include <rte_os.h>
 
 /** Match type. */
 enum rte_swx_table_match_type {
@@ -68,7 +69,7 @@ struct rte_swx_table_entry {
 	/** Used to facilitate the membership of this table entry to a
 	 * linked list.
 	 */
-	TAILQ_ENTRY(rte_swx_table_entry) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_entry) node;
 
 	/** Key value for the current entry. Array of *key_size* bytes or NULL
 	 * if the *key_size* for the current table is 0.
@@ -111,7 +112,7 @@ struct rte_swx_table_entry {
 };
 
 /** List of table entries. */
-TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
+RTE_TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
 
 /**
  * Table memory footprint get
diff --git a/lib/table/rte_swx_table_selector.h b/lib/table/rte_swx_table_selector.h
index 71b6a74810..62988d2856 100644
--- a/lib/table/rte_swx_table_selector.h
+++ b/lib/table/rte_swx_table_selector.h
@@ -16,7 +16,6 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_compat.h>
 
@@ -56,7 +55,7 @@ struct rte_swx_table_selector_params {
 /** Group member parameters. */
 struct rte_swx_table_selector_member {
 	/** Linked list connectivity. */
-	TAILQ_ENTRY(rte_swx_table_selector_member) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_selector_member) node;
 
 	/** Member ID. */
 	uint32_t member_id;
@@ -66,7 +65,7 @@ struct rte_swx_table_selector_member {
 };
 
 /** List of group members. */
-TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
+RTE_TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
 
 /** Group parameters. */
 struct rte_swx_table_selector_group {
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e0b67721b6..e4a445e709 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -32,7 +32,7 @@ vhost_user_iotlb_pending_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_pending_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -100,7 +100,8 @@ vhost_user_iotlb_pending_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next,
+				temp_node) {
 		if (node->iova < iova)
 			continue;
 		if (node->iova >= iova + size)
@@ -121,7 +122,7 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -141,7 +142,7 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
 
 	entry_idx = rte_rand() % vq->iotlb_cache_nr;
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		if (!entry_idx) {
 			TAILQ_REMOVE(&vq->iotlb_list, node, next);
 			rte_mempool_put(vq->iotlb_pool, node);
@@ -218,7 +219,7 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		/* Sorted list */
 		if (unlikely(iova + size < node->iova))
 			break;
diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/rte_vdpa_dev.h
index bfada387b0..b0f494815f 100644
--- a/lib/vhost/rte_vdpa_dev.h
+++ b/lib/vhost/rte_vdpa_dev.h
@@ -71,7 +71,7 @@ struct rte_vdpa_dev_ops {
  * vdpa device structure includes device address and device operations.
  */
 struct rte_vdpa_device {
-	TAILQ_ENTRY(rte_vdpa_device) next;
+	RTE_TAILQ_ENTRY(rte_vdpa_device) next;
 	/** Generic device information */
 	struct rte_device *device;
 	/** vdpa device operations */
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index 99a926a772..6dd91859ac 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -115,7 +115,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
 	int ret = -1;
 
 	rte_spinlock_lock(&vdpa_device_list_lock);
-	TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
+	RTE_TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
 		if (dev != cur_dev)
 			continue;
 
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] Minutes of Technical Board Meeting, 2021-08-11
       [not found]     <e600e472-2b39-7f07-d20e-9d6fe8e6d515@intel.com>
@ 2021-08-16  9:34  3% ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-08-16  9:34 UTC (permalink / raw)
  To: techboard; +Cc: dev

Minutes of Technical Board Meeting, 2021-08-11

Members Attending: 8/12
   - Aaron Conole
   - Ferruh Yigit (Chair)
   - Hemant Agrawal
   - Honnappa Nagarahalli
   - Jerin Jacob
   - Kevin Traynor
   - Konstantin Ananyev
   - Stephen Hemminger

NOTE: The Technical Board meetings take place every second Wednesday
on https://meet.jit.si/DPDK at 3 pm UTC.
Meetings are public, and DPDK community members are welcome to attend.
Agenda and minutes can be found at http://core.dpdk.org/techboard/minutes

NOTE: Next meeting will be on Wednesday 2021-08-25 @3pm UTC,
and will be chaired by Hemant.


#1 Extending stable ABI / API to two years
    * No decision yet; deferred to the next meeting.
    * Can continue executing the tasks listed in the excel sheet during v21.11

#2 Documenting criteria on adding/removing members to technical board
    * Document needs further reviews, please review.
    * Will set a deadline for the document review at the next meeting.

#3 Atomic API
    * Atomic built-ins used because of old compilers.
    * If we can drop old compiler support, we can switch to atomic APIs.
    * A discussion about dropping RHEL7 is ongoing on the mailing list.

#4 Exception path sample app
    * No objection to having the sample app in principle.
    * Details and design can be discussed more when patches are available.

#5 github repo access for extending CI for Arm support
    * Honnappa and Aaron will figure out the details of what exactly is required.
    * Later we can create a policy around it; for now it is for
      Thomas/Aaron/Honnappa to manage.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] version: 21.11-rc0
  2021-08-08 19:26 11% [dpdk-dev] [PATCH] version: 21.11-rc0 Thomas Monjalon
  2021-08-12 14:36  0% ` Ferruh Yigit
@ 2021-08-17  6:34  4% ` David Marchand
  2021-08-17 12:04  4%   ` [dpdk-dev] [dpdk-ci] " Lincoln Lavoie
  1 sibling, 1 reply; 200+ results
From: David Marchand @ 2021-08-17  6:34 UTC (permalink / raw)
  To: Thomas Monjalon, ci; +Cc: dev, Ray Kinsella

On Sun, Aug 8, 2021 at 9:27 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
> new file mode 100644
> index 0000000000..d707a554ef
> --- /dev/null
> +++ b/doc/guides/rel_notes/release_21_11.rst
> @@ -0,0 +1,136 @@

[snip]

> +Known Issues
> +------------
> +
> +.. This section should contain new known issues in this release. Sample format:
> +
> +   * **Add title in present tense with full stop.**
> +
> +     Add a short 1-2 sentence description of the known issue
> +     in the present tense. Add information on any known workarounds.
> +
> +   This section is a comment. Do not overwrite or remove it.
> +   Also, make sure to start the actual text at the margin.
> +   =======================================================
> +
> +

The known issue "**Last mbuf segment not implicitly reset.**" added in
21.08 release notes still applies to 21.11.
But this can be fixed later; patches are starting to accumulate, and
some CI failures are due to patches being applied on top of 21.08.

The rest lgtm, so:
Acked-by: David Marchand <david.marchand@redhat.com>

Applied, thanks.


On this last subject, this mail is a ping to CI lab owners.
The 21.11 release won't preserve ABI compatibility with previous releases, so
please disable ABI checks until 22.02.


-- 
David Marchand


^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH 21.11 v2 0/3] octeontx build only on 64-bit Linux
  @ 2021-08-17  8:46  0%   ` David Marchand
  0 siblings, 0 replies; 200+ results
From: David Marchand @ 2021-08-17  8:46 UTC (permalink / raw)
  To: Pavan Nikhilesh; +Cc: dev, Thomas Monjalon, Jerin Jacob Kollanukkaran

On Thu, Mar 25, 2021 at 3:52 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> This is a reorg of the patches from Pavan.
> It has been discussed that it should wait for DPDK 21.11
> for ABI compatibility reason.
>
> Pavan Nikhilesh (3):
>   net/thunderx: enable build only on 64-bit Linux
>   common/octeontx: enable build only on 64-bit Linux
>   common/octeontx2: enable build only on 64-bit Linux
>
>  drivers/common/octeontx/meson.build   |  6 ++++++
>  drivers/common/octeontx2/meson.build  |  4 ++--
>  drivers/compress/octeontx/meson.build |  6 ++++++
>  drivers/crypto/octeontx/meson.build   |  7 +++++--
>  drivers/event/octeontx/meson.build    |  6 ++++++
>  drivers/event/octeontx2/meson.build   |  4 ++--
>  drivers/mempool/octeontx/meson.build  |  5 +++--
>  drivers/mempool/octeontx2/meson.build |  9 ++-------
>  drivers/net/octeontx/meson.build      |  4 ++--
>  drivers/net/octeontx2/meson.build     | 10 ++--------
>  drivers/net/thunderx/meson.build      |  4 ++--
>  drivers/raw/octeontx2_dma/meson.build | 10 ++++++----
>  12 files changed, 44 insertions(+), 31 deletions(-)

There were a couple of cleanups (indentation, etc.) and changes in the meson files.
This series does not apply cleanly on the main branch.
Could you rebase it?

I noticed that the net/cnxk driver does not have this check, but it is
disabled anyway since it depends on common/cnxk.
Is it worth adding the check for consistency?


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-ci] [PATCH] version: 21.11-rc0
  2021-08-17  6:34  4% ` [dpdk-dev] " David Marchand
@ 2021-08-17 12:04  4%   ` Lincoln Lavoie
  2021-08-17 15:19  0%     ` David Marchand
  2021-08-24  7:58  3%     ` David Marchand
  0 siblings, 2 replies; 200+ results
From: Lincoln Lavoie @ 2021-08-17 12:04 UTC (permalink / raw)
  To: David Marchand; +Cc: Thomas Monjalon, ci, dev, Ray Kinsella

Hi David,

ABI testing was disabled / stopped on Friday in the Community CI lab.
Patches from before that for 21.11 would have still had the test run and
could have failures listed. I'm not sure if there is a way to "remove"
those failure marks from patchworks.  But, for all new patches since then,
ABI hasn't been run.

Cheers,
Lincoln

On Tue, Aug 17, 2021 at 2:34 AM David Marchand <david.marchand@redhat.com>
wrote:

> On Sun, Aug 8, 2021 at 9:27 PM Thomas Monjalon <thomas@monjalon.net>
> wrote:
> > diff --git a/doc/guides/rel_notes/release_21_11.rst
> b/doc/guides/rel_notes/release_21_11.rst
> > new file mode 100644
> > index 0000000000..d707a554ef
> > --- /dev/null
> > +++ b/doc/guides/rel_notes/release_21_11.rst
> > @@ -0,0 +1,136 @@
>
> [snip]
>
> > +Known Issues
> > +------------
> > +
> > +.. This section should contain new known issues in this release. Sample
> format:
> > +
> > +   * **Add title in present tense with full stop.**
> > +
> > +     Add a short 1-2 sentence description of the known issue
> > +     in the present tense. Add information on any known workarounds.
> > +
> > +   This section is a comment. Do not overwrite or remove it.
> > +   Also, make sure to start the actual text at the margin.
> > +   =======================================================
> > +
> > +
>
> The known issue "**Last mbuf segment not implicitly reset.**" added in
> 21.08 release notes still applies to 21.11.
> But this can be fixed later, patches are starting to accumulate and
> some CI failures are due to patches being applied to 21.08.
>
> The rest lgtm, so:
> Acked-by: David Marchand <david.marchand@redhat.com>
>
> Applied, thanks.
>
>
> On this last subject, this mail is a ping to CI labs owners.
> 21.11 release won't preserve ABI compat with previous releases, so
> please disable ABI checks until 22.02.
>
>
> --
> David Marchand
>
>

-- 
*Lincoln Lavoie*
Principal Engineer, Broadband Technologies
21 Madbury Rd., Ste. 100, Durham, NH 03824
lylavoie@iol.unh.edu
https://www.iol.unh.edu
+1-603-674-2755 (m)
<https://www.iol.unh.edu>

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [dpdk-ci] [PATCH] version: 21.11-rc0
  2021-08-17 12:04  4%   ` [dpdk-dev] [dpdk-ci] " Lincoln Lavoie
@ 2021-08-17 15:19  0%     ` David Marchand
  2021-08-17 16:02  0%       ` Ali Alnubani
  2021-08-24  7:58  3%     ` David Marchand
  1 sibling, 1 reply; 200+ results
From: David Marchand @ 2021-08-17 15:19 UTC (permalink / raw)
  To: Lincoln Lavoie; +Cc: Thomas Monjalon, ci, dev, Ray Kinsella, Ali Alnubani

On Tue, Aug 17, 2021 at 2:04 PM Lincoln Lavoie <lylavoie@iol.unh.edu> wrote:
>
> Hi David,
>
> ABI testing was disable / stopped on Friday in the Community CI lab.  Patches from before that for 21.11 would have still had the test run and could have failures listed. I'm not sure if there is a way to "remove" those failure marks from patchworks.  But, for all new patches since then, ABI hasn't been run.

I don't think we can easily clean those reports in patchwork.
Copying Ali, in case he has an idea but otherwise we can live with this.

Thanks Lincoln.


-- 
David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-ci] [PATCH] version: 21.11-rc0
  2021-08-17 15:19  0%     ` David Marchand
@ 2021-08-17 16:02  0%       ` Ali Alnubani
  0 siblings, 0 replies; 200+ results
From: Ali Alnubani @ 2021-08-17 16:02 UTC (permalink / raw)
  To: David Marchand, Lincoln Lavoie, NBU-Contact-Thomas Monjalon
  Cc: ci, dev, Ray Kinsella

Hi,

> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, August 17, 2021 6:20 PM
> To: Lincoln Lavoie <lylavoie@iol.unh.edu>
> Cc: NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; ci@dpdk.org;
> dev <dev@dpdk.org>; Ray Kinsella <mdr@ashroe.eu>; Ali Alnubani
> <alialnu@nvidia.com>
> Subject: Re: [dpdk-ci] [PATCH] version: 21.11-rc0
> 
> On Tue, Aug 17, 2021 at 2:04 PM Lincoln Lavoie <lylavoie@iol.unh.edu>
> wrote:
> >
> > Hi David,
> >
> > ABI testing was disable / stopped on Friday in the Community CI lab.
> Patches from before that for 21.11 would have still had the test run and could
> have failures listed. I'm not sure if there is a way to "remove" those failure
> marks from patchworks.  But, for all new patches since then, ABI hasn't been
> run.
> 
> I don't think we can easily clean those reports in patchwork.
> Copying Ali, in case he has an idea but otherwise we can live with this.
> 

We can override each check by another one with "success" as the status and "skipped" as the description maybe?

> Thanks Lincoln.
> 
> 
> --
> David Marchand


^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH 2/4] mempool: add non-IO flag
  @ 2021-08-18  9:07  4% ` Dmitry Kozlyuk
  0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2021-08-18  9:07 UTC (permalink / raw)
  To: dev; +Cc: Matan Azrad, Olivier Matz, Andrew Rybchenko

Mempool is a generic allocator that is not necessarily used for device
IO operations, nor is its memory necessarily used for DMA. Add the
MEMPOOL_F_NON_IO flag to mark such mempools.
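
For illustration only (this snippet is not part of the patch), a
hypothetical application creating a CPU-only metadata pool could pass the
new flag at creation time, assuming the standard rte_mempool_create()
prototype:

  #include <rte_mempool.h>

  /*
   * Hypothetical helper: 4096 x 64-byte control blocks that are never
   * handed to a device, so PMDs may skip DMA-related work for this pool.
   */
  static struct rte_mempool *
  create_ctrl_pool(void)
  {
          return rte_mempool_create("ctrl_pool", 4096, 64,
                  64,         /* per-lcore cache size */
                  0,          /* no private data */
                  NULL, NULL, /* no pool init callback */
                  NULL, NULL, /* no per-object init callback */
                  SOCKET_ID_ANY,
                  MEMPOOL_F_NON_IO);
  }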

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 doc/guides/rel_notes/release_21_11.rst | 3 +++
 lib/mempool/rte_mempool.h              | 4 ++++
 2 files changed, 7 insertions(+)

diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a554ef..dc9b98b862 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -84,6 +84,9 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* mempool: Added ``MEMPOOL_F_NON_IO`` flag to give a hint to DPDK components
+  that objects from this pool will not be used for device IO (e.g. DMA).
+
 
 ABI Changes
 -----------
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 1e9b8f0229..7f0657ab16 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -263,6 +263,7 @@ struct rte_mempool {
 #define MEMPOOL_F_SC_GET         0x0008 /**< Default get is "single-consumer".*/
 #define MEMPOOL_F_POOL_CREATED   0x0010 /**< Internal: pool is created. */
 #define MEMPOOL_F_NO_IOVA_CONTIG 0x0020 /**< Don't need IOVA contiguous objs. */
+#define MEMPOOL_F_NON_IO         0x0040 /**< Not used for device IO (DMA). */
 
 /**
  * @internal When debug is enabled, store some statistics.
@@ -992,6 +993,9 @@ typedef void (rte_mempool_ctor_t)(struct rte_mempool *, void *);
  *     "single-consumer". Otherwise, it is "multi-consumers".
  *   - MEMPOOL_F_NO_IOVA_CONTIG: If set, allocated objects won't
  *     necessarily be contiguous in IO memory.
+ *   - MEMPOOL_F_NON_IO: If set, the mempool is considered to be
+ *     never used for device IO, i.e. DMA operations,
+ *     which may affect some PMD behavior.
  * @return
  *   The pointer to the new allocated mempool, on success. NULL on error
  *   with rte_errno set appropriately. Possible rte_errno values include:
-- 
2.25.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v3 0/6] Enable the internal EAL thread API
  @ 2021-08-18 13:44  4% ` Narcisa Ana Maria Vasile
  2021-08-18 13:44  4%   ` [dpdk-dev] [PATCH v3 2/6] eal: add function for control thread creation Narcisa Ana Maria Vasile
  0 siblings, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-08-18 13:44 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

This patchset enables the new EAL thread API.
The newly defined thread attributes, priority and affinity,
are used in eal/windows when creating the threads. Similarly, 
some changes have been done in eal/linux/eal.c and eal/freebsd/eal.c
to initialize priority to a default value and set thread attributes.

The user is offered the option of either using the rte_thread_* API or
a 3rd party thread library, through a meson flag
called "use_external_thread_lib".
By default, this flag is set to FALSE, which means Windows libraries
and applications will use the EAL rte_thread_* API 
defined in windows/rte_thread.c for managing threads.
When the flag is set to TRUE, the common/rte_thread.c file is compiled
and an external thread library is used.
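
For example (shown for illustration; the option itself is added to
meson_options.txt by this series), a build that should use an external
thread library instead of the EAL implementation could be configured as:

  meson setup build -Duse_external_thread_lib=true
  ninja -C build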

This patchset adds a new function for creating control threads that
uses the new thread API.
It enables the usage of the new function in Windows code and common code.
The old function is kept to avoid an ABI break; however, its definition
is commented out on Windows, since the pthread_t and pthread_attr_t
arguments that it receives have been replaced with the new API there.
This allows testing the "eal: Add EAL API for threading" series that this
patchset depends on.

The ethdev lib also contains some changes that break the ABI.
Enabling the new EAL thread API will probably require going through
the proper process of ABI changes.

Depends-on: series-18172 ("eal: Add EAL API for threading")

v3:
- use RTE_INIT to only load kernel32.dll once and get function
  pointer to SetThreadDescription()
- minor fixes

v2:
- fix typo in SetThreadDescription_type function pointer
- add Depends-on on all patches to fix apply errors.
- modify cover letter

Narcisa Vasile (6):
  eal: add function that sets thread name
  eal: add function for control thread creation
  Enable the new EAL thread API in app, drivers and examples
  lib: enable the new EAL thread API
  eal: set affinity and priority attributes
  Allow choice between internal EAL thread API and external lib

 app/test/process.h                            |   8 +-
 app/test/test_lcores.c                        |  18 +-
 app/test/test_link_bonding.c                  |  14 +-
 app/test/test_lpm_perf.c                      |  12 +-
 config/meson.build                            |   1 -
 drivers/bus/dpaa/base/qbman/bman_driver.c     |   5 +-
 drivers/bus/dpaa/base/qbman/dpaa_sys.c        |  14 +-
 drivers/bus/dpaa/base/qbman/process.c         |   6 +-
 drivers/bus/dpaa/dpaa_bus.c                   |  14 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  19 +-
 drivers/common/dpaax/compat.h                 |   2 +-
 drivers/common/mlx5/windows/mlx5_common_os.h  |   1 +
 drivers/compress/mlx5/mlx5_compress.c         |  14 +-
 drivers/event/dlb2/dlb2.c                     |   2 +-
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |   7 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |   2 +-
 drivers/net/af_xdp/rte_eth_af_xdp.c           |  18 +-
 drivers/net/ark/ark_ethdev.c                  |   4 +-
 drivers/net/ark/ark_pktgen.c                  |   4 +-
 drivers/net/atlantic/atl_ethdev.c             |   4 +-
 drivers/net/atlantic/atl_types.h              |   4 +-
 .../net/atlantic/hw_atl/hw_atl_utils_fw2x.c   |  26 +--
 drivers/net/axgbe/axgbe_common.h              |   2 +-
 drivers/net/axgbe/axgbe_dev.c                 |   8 +-
 drivers/net/axgbe/axgbe_ethdev.c              |   8 +-
 drivers/net/axgbe/axgbe_ethdev.h              |   8 +-
 drivers/net/axgbe/axgbe_i2c.c                 |   4 +-
 drivers/net/axgbe/axgbe_mdio.c                |   8 +-
 drivers/net/axgbe/axgbe_phy_impl.c            |   6 +-
 drivers/net/bnxt/bnxt.h                       |  16 +-
 drivers/net/bnxt/bnxt_cpr.c                   |   4 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  54 ++---
 drivers/net/bnxt/bnxt_irq.c                   |   8 +-
 drivers/net/bnxt/bnxt_reps.c                  |  10 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |  34 ++--
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |   4 +-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c          |  28 +--
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h          |   8 +-
 drivers/net/bnxt/tf_ulp/ulp_ha_mgr.c          |   4 +-
 drivers/net/bnxt/tf_ulp/ulp_ha_mgr.h          |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                |   2 +-
 drivers/net/dpaa/dpaa_rxtx.c                  |   2 +-
 drivers/net/ena/base/ena_plat_dpdk.h          |  15 +-
 drivers/net/enic/enic.h                       |   2 +-
 drivers/net/ice/ice_dcf_parent.c              |   8 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   6 +-
 drivers/net/ixgbe/ixgbe_ethdev.h              |   2 +-
 drivers/net/mlx5/linux/mlx5_os.c              |   2 +-
 drivers/net/mlx5/mlx5.c                       |  20 +-
 drivers/net/mlx5/mlx5.h                       |   2 +-
 drivers/net/mlx5/mlx5_txpp.c                  |   8 +-
 drivers/net/mlx5/windows/mlx5_flow_os.c       |  10 +-
 drivers/net/mlx5/windows/mlx5_os.c            |   2 +-
 drivers/net/qede/base/bcm_osal.h              |   8 +-
 drivers/net/vhost/rte_eth_vhost.c             |  24 +--
 .../net/virtio/virtio_user/virtio_user_dev.c  |  30 +--
 .../net/virtio/virtio_user/virtio_user_dev.h  |   2 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |  49 +++--
 drivers/vdpa/mlx5/mlx5_vdpa.c                 |  24 +--
 drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  51 ++---
 examples/kni/main.c                           |   1 +
 .../pthread_shim/pthread_shim.h               |   1 +
 lib/eal/common/eal_common_options.c           |   6 +-
 lib/eal/common/eal_common_thread.c            | 105 +++++++++-
 lib/eal/common/eal_common_trace.c             |   1 +
 lib/eal/common/eal_private.h                  |   2 +-
 lib/eal/common/eal_thread.h                   |   6 +
 lib/eal/common/malloc_mp.c                    |   2 +
 lib/eal/common/rte_thread.c                   |  17 ++
 lib/eal/freebsd/eal.c                         |  53 +++--
 lib/eal/freebsd/eal_alarm.c                   |  12 +-
 lib/eal/freebsd/eal_interrupts.c              |   6 +-
 lib/eal/freebsd/eal_thread.c                  |  10 +-
 lib/eal/include/rte_lcore.h                   |   6 +
 lib/eal/include/rte_per_lcore.h               |   2 +-
 lib/eal/include/rte_thread.h                  |  43 ++++
 lib/eal/linux/eal.c                           |  55 +++--
 lib/eal/linux/eal_alarm.c                     |  10 +-
 lib/eal/linux/eal_interrupts.c                |   8 +-
 lib/eal/linux/eal_thread.c                    |  11 +-
 lib/eal/linux/eal_timer.c                     |   6 +-
 lib/eal/version.map                           |   6 +-
 lib/eal/windows/eal.c                         |  44 +++-
 lib/eal/windows/eal_interrupts.c              |   8 +-
 lib/eal/windows/eal_thread.c                  |  35 +---
 lib/eal/windows/eal_windows.h                 |  10 -
 lib/eal/windows/include/pthread.h             | 192 ------------------
 lib/eal/windows/include/rte_windows.h         |   1 +
 lib/eal/windows/meson.build                   |   7 +-
 lib/eal/windows/rte_thread.c                  |  68 +++++++
 lib/ethdev/rte_ethdev.c                       |   4 +-
 lib/ethdev/rte_ethdev_core.h                  |   4 +-
 lib/ethdev/rte_flow.c                         |   4 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   1 +
 lib/vhost/vhost.c                             |   1 +
 meson_options.txt                             |   2 +
 97 files changed, 777 insertions(+), 661 deletions(-)
 delete mode 100644 lib/eal/windows/include/pthread.h

-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v3 2/6] eal: add function for control thread creation
  2021-08-18 13:44  4% ` [dpdk-dev] [PATCH v3 " Narcisa Ana Maria Vasile
@ 2021-08-18 13:44  4%   ` Narcisa Ana Maria Vasile
  0 siblings, 0 replies; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-08-18 13:44 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

The existing rte_ctrl_thread_create() function will be replaced
with rte_thread_ctrl_thread_create() that uses the internal
EAL thread API.

This patch only introduces the new control thread creation
function. Replacing the old function needs to be done according
to the ABI change procedures, to avoid an ABI break.
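
For reference, a hypothetical caller (not part of this patch) would use the
new function roughly as follows, based on the prototype added to
rte_thread.h below; monitor_loop() and start_monitor() are made-up names:

  #include <rte_thread.h>

  static void *
  monitor_loop(void *arg)
  {
          /* ... poll some state until asked to stop ... */
          (void)arg;
          return NULL;
  }

  static int
  start_monitor(void)
  {
          rte_thread_t tid;
          /* Thread name is limited to 16 characters including '\0'. */
          int ret = rte_thread_ctrl_thread_create(&tid, "dpdk-monitor",
                          monitor_loop, NULL);

          /* On failure a positive errno-style error number is returned. */
          return ret;
  }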

Signed-off-by: Narcisa Vasile <navasile@microsoft.com>
---
 lib/eal/common/eal_common_thread.c | 81 ++++++++++++++++++++++++++++++
 lib/eal/include/rte_thread.h       | 27 ++++++++++
 lib/eal/version.map                |  1 +
 3 files changed, 109 insertions(+)

diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
index 1a52f42a2b..79545c67d9 100644
--- a/lib/eal/common/eal_common_thread.c
+++ b/lib/eal/common/eal_common_thread.c
@@ -259,6 +259,87 @@ rte_ctrl_thread_create(pthread_t *thread, const char *name,
 	return -ret;
 }
 
+struct rte_thread_ctrl_ctx {
+	rte_thread_func start_routine;
+	void *arg;
+	const char *name;
+};
+
+static void *ctrl_thread_wrapper(void *arg)
+{
+	struct internal_config *conf = eal_get_internal_configuration();
+	rte_cpuset_t *cpuset = &conf->ctrl_cpuset;
+	struct rte_thread_ctrl_ctx *ctx = arg;
+	rte_thread_func start_routine = ctx->start_routine;
+	void *routine_arg = ctx->arg;
+
+	__rte_thread_init(rte_lcore_id(), cpuset);
+
+	if (ctx->name != NULL) {
+		if (rte_thread_name_set(rte_thread_self(), ctx->name) < 0)
+			RTE_LOG(DEBUG, EAL, "Cannot set name for ctrl thread\n");
+	}
+
+	free(arg);
+
+	return start_routine(routine_arg);
+}
+
+int
+rte_thread_ctrl_thread_create(rte_thread_t *thread, const char *name,
+		rte_thread_func start_routine, void *arg)
+{
+	int ret;
+	rte_thread_attr_t attr;
+	struct internal_config *conf = eal_get_internal_configuration();
+	rte_cpuset_t *cpuset = &conf->ctrl_cpuset;
+	struct rte_thread_ctrl_ctx *ctx = NULL;
+
+	if (start_routine == NULL) {
+		ret = EINVAL;
+		goto cleanup;
+	}
+
+	ctx = malloc(sizeof(*ctx));
+	if (ctx == NULL) {
+		ret = ENOMEM;
+		goto cleanup;
+	}
+
+	ctx->start_routine = start_routine;
+	ctx->arg = arg;
+	ctx->name = name;
+
+	ret = rte_thread_attr_init(&attr);
+	if (ret != 0) {
+		RTE_LOG(DEBUG, EAL, "Cannot init ctrl thread attributes\n");
+		goto cleanup;
+	}
+
+	ret = rte_thread_attr_set_affinity(&attr, cpuset);
+	if (ret != 0) {
+		RTE_LOG(DEBUG, EAL, "Cannot set affinity attribute for ctrl thread\n");
+		goto cleanup;
+	}
+	ret = rte_thread_attr_set_priority(&attr, RTE_THREAD_PRIORITY_NORMAL);
+	if (ret != 0) {
+		RTE_LOG(DEBUG, EAL, "Cannot set priority attribute for ctrl thread\n");
+		goto cleanup;
+	}
+
+	ret = rte_thread_create(thread, &attr, ctrl_thread_wrapper, ctx);
+	if (ret != 0) {
+		RTE_LOG(DEBUG, EAL, "Cannot create ctrl thread\n");
+		goto cleanup;
+	}
+
+	return 0;
+
+cleanup:
+	free(ctx);
+	return ret;
+}
+
 int
 rte_thread_register(void)
 {
diff --git a/lib/eal/include/rte_thread.h b/lib/eal/include/rte_thread.h
index 2f6258e336..e34101cc98 100644
--- a/lib/eal/include/rte_thread.h
+++ b/lib/eal/include/rte_thread.h
@@ -455,6 +455,33 @@ int rte_thread_barrier_destroy(rte_thread_barrier *barrier);
 __rte_experimental
 int rte_thread_name_set(rte_thread_t thread_id, const char *name);
 
+/**
+ * Create a control thread.
+ *
+ * Set affinity and thread name. The affinity of the new thread is based
+ * on the CPU affinity retrieved at the time rte_eal_init() was called,
+ * the dataplane and service lcores are then excluded.
+ *
+ * @param thread
+ *   Filled with the thread id of the new created thread.
+ *
+ * @param name
+ *   The name of the control thread (max 16 characters including '\0').
+ *
+ * @param start_routine
+ *   Function to be executed by the new thread.
+ *
+ * @param arg
+ *   Argument passed to start_routine.
+ *
+ * @return
+ *   On success, return 0;
+ *   On failure, return a positive errno-style error number.
+ */
+__rte_experimental
+int rte_thread_ctrl_thread_create(rte_thread_t *thread, const char *name,
+		rte_thread_func start_routine, void *arg);
+
 /**
  * Create a TLS data key visible to all threads in the process.
  * the created key is later used to get/set a value.
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 7ce8dcea07..67569b1bf9 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -447,6 +447,7 @@ EXPERIMENTAL {
 	rte_thread_barrier_wait;
 	rte_thread_barrier_destroy;
 	rte_thread_name_set;
+	rte_thread_ctrl_thread_create;
 };
 
 INTERNAL {
-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v2] ethdev: fix representor port ID search by name
    2021-07-19  6:58  0% ` Xueming(Steven) Li
  2021-07-29  4:20  0% ` Xueming(Steven) Li
@ 2021-08-18 14:00  3% ` Andrew Rybchenko
  2021-08-27  9:18  0%   ` Xueming(Steven) Li
  2021-08-20 12:18  3% ` [dpdk-dev] [PATCH v3] " Andrew Rybchenko
  3 siblings, 1 reply; 200+ results
From: Andrew Rybchenko @ 2021-08-18 14:00 UTC (permalink / raw)
  To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
	Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, Thomas Monjalon,
	Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov, Xueming Li

From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>

Getting a list of representors from a representor does not make sense.
Instead, a parent device should be used.

To this end, extend the rte_eth_dev_data structure to include the port ID
of the parent device for representors.

Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
The new field is added into the hole in rte_eth_dev_data structure.
The patch does not change ABI, but extra care is required since ABI
check is disabled for the structure because of the libabigail bug [1].

Potentially it is bad for out-of-tree drivers which implement
representors but do not fill in the new parent_port_id field in
the rte_eth_dev_data structure. Do we care?

Maybe the patch should add lines to the release notes, but I'd like
to get initial feedback first.

mlx5 changes should be reviewed by maintainers very carefully, since
we are not sure we have patched them correctly.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060

 drivers/net/bnxt/bnxt_reps.c             |  1 +
 drivers/net/enic/enic_vf_representor.c   |  1 +
 drivers/net/i40e/i40e_vf_representor.c   |  1 +
 drivers/net/ice/ice_dcf_vf_representor.c |  1 +
 drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
 drivers/net/mlx5/linux/mlx5_os.c         | 17 +++++++++++++++++
 drivers/net/mlx5/windows/mlx5_os.c       | 17 +++++++++++++++++
 lib/ethdev/ethdev_driver.h               |  6 +++---
 lib/ethdev/rte_class_eth.c               | 22 ++++++++++++++++++++--
 lib/ethdev/rte_ethdev.c                  |  8 ++++----
 lib/ethdev/rte_ethdev_core.h             |  4 ++++
 11 files changed, 70 insertions(+), 9 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index bdbad53b7d..902591cd39 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	eth_dev->data->representor_id = rep_params->vf_id;
+	eth_dev->data->parent_port_id = rep_params->parent_dev->data->port_id;
 
 	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
 	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index 79dd6e5640..6ee7967ce9 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	eth_dev->data->representor_id = vf->vf_id;
+	eth_dev->data->parent_port_id = pf->port_id;
 	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
 		sizeof(struct rte_ether_addr) *
 		ENIC_UNICAST_PERFECT_FILTERS, 0);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..865b637585 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	ethdev->data->representor_id = representor->vf_id;
+	ethdev->data->parent_port_id = pf->dev_data->parent_port_id;
 
 	/* Setting the number queues allocated to the VF */
 	ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e9..c7cd3fd290 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
 
 	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	vf_rep_eth_dev->data->representor_id = repr->vf_id;
+	vf_rep_eth_dev->data->parent_port_id = repr->dcf_eth_dev->data->port_id;
 
 	vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
 
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a194..7a2063849e 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	ethdev->data->representor_id = representor->vf_id;
+	ethdev->data->parent_port_id = representor->pf_ethdev->data->port_id;
 
 	/* Set representor device ops */
 	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa48..a68fa7beb7 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1677,6 +1677,23 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->representor) {
 		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 		eth_dev->data->representor_id = priv->representor_id;
+		MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
+			struct mlx5_priv *opriv =
+				rte_eth_devices[port_id].data->dev_private;
+			if (opriv &&
+			    opriv->master &&
+			    opriv->domain_id == priv->domain_id &&
+			    opriv->sh == priv->sh) {
+				eth_dev->data->parent_port_id =
+					rte_eth_devices[port_id].data->port_id;
+				break;
+			}
+		}
+		if (port_id >= RTE_MAX_ETHPORTS) {
+			DRV_LOG(ERR, "no master device for representor");
+			err = ENODEV;
+			goto error;
+		}
 	}
 	priv->mp_id.port_id = eth_dev->data->port_id;
 	strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 7e1df1c751..0c5a02bfcb 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -543,6 +543,23 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->representor) {
 		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 		eth_dev->data->representor_id = priv->representor_id;
+		MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
+			struct mlx5_priv *opriv =
+				rte_eth_devices[port_id].data->dev_private;
+			if (opriv &&
+			    opriv->master &&
+			    opriv->domain_id == priv->domain_id &&
+			    opriv->sh == priv->sh) {
+				eth_dev->data->parent_port_id =
+					rte_eth_devices[port_id].data->port_id;
+				break;
+			}
+		}
+		if (port_id >= RTE_MAX_ETHPORTS) {
+			DRV_LOG(ERR, "no master device for representor");
+			err = ENODEV;
+			goto error;
+		}
 	}
 	/*
 	 * Store associated network device interface index. This index
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index fd5b7ca550..d1a1499538 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1287,8 +1287,8 @@ struct rte_eth_devargs {
  * For backward compatibility, if no representor info, direct
  * map legacy VF (no controller and pf).
  *
- * @param ethdev
- *  Handle of ethdev port.
+ * @param port_id
+ *  Port ID of the backing device.
  * @param type
  *  Representor type.
  * @param controller
@@ -1305,7 +1305,7 @@ struct rte_eth_devargs {
  */
 __rte_internal
 int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
 			   int controller, int pf, int representor_port,
 			   uint16_t *repr_id);
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 1fe5fa1f36..167d2d798c 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -95,14 +95,32 @@ eth_representor_cmp(const char *key __rte_unused,
 		c = i / (np * nf);
 		p = (i / nf) % np;
 		f = i % nf;
-		if (rte_eth_representor_id_get(edev,
+		/*
+		 * rte_eth_representor_id_get expects to receive port ID of
+		 * the master device, but in order to maintain compatibility
+		 * with mlx5's hardware bonding and legacy representor
+		 * specification using just VF numbers, the representor's port
+		 * ID is tried first.
+		 */
+		ret = rte_eth_representor_id_get(edev->data->port_id,
 			eth_da.type,
 			eth_da.nb_mh_controllers == 0 ? -1 :
 					eth_da.mh_controllers[c],
 			eth_da.nb_ports == 0 ? -1 : eth_da.ports[p],
 			eth_da.nb_representor_ports == 0 ? -1 :
 					eth_da.representor_ports[f],
-			&id) < 0)
+			&id);
+		if (ret == -ENOTSUP)
+			ret = rte_eth_representor_id_get(
+				edev->data->parent_port_id,
+				eth_da.type,
+				eth_da.nb_mh_controllers == 0 ? -1 :
+						eth_da.mh_controllers[c],
+				eth_da.nb_ports == 0 ? -1 : eth_da.ports[p],
+				eth_da.nb_representor_ports == 0 ? -1 :
+						eth_da.representor_ports[f],
+				&id);
+		if (ret < 0)
 			continue;
 		if (data->representor_id == id)
 			return 0;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1..228ef7bf23 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5997,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
 }
 
 int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
 			   int controller, int pf, int representor_port,
 			   uint16_t *repr_id)
@@ -6013,7 +6013,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 		return -EINVAL;
 
 	/* Get PMD representor range info. */
-	ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
+	ret = rte_eth_representor_info_get(port_id, NULL);
 	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
 	    controller == -1 && pf == -1) {
 		/* Direct mapping for legacy VF representor. */
@@ -6028,7 +6028,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 	if (info == NULL)
 		return -ENOMEM;
 	info->nb_ranges_alloc = n;
-	ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
+	ret = rte_eth_representor_info_get(port_id, info);
 	if (ret < 0)
 		goto out;
 
@@ -6047,7 +6047,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 			continue;
 		if (info->ranges[i].id_end < info->ranges[i].id_base) {
 			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
-				ethdev->data->port_id, info->ranges[i].id_base,
+				port_id, info->ranges[i].id_base,
 				info->ranges[i].id_end, i);
 			continue;
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..13cb84b52f 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -185,6 +185,10 @@ struct rte_eth_dev_data {
 			/**< Switch-specific identifier.
 			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
 			 */
+	uint16_t parent_port_id;
+			/**< Port ID of the backing device.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
 
 	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
 	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-- 
2.30.2


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v4 0/6] Enable the internal EAL thread API
  @ 2021-08-18 21:19  4% ` Narcisa Ana Maria Vasile
  2021-08-18 21:19  4%   ` [dpdk-dev] [PATCH v4 2/6] eal: add function for control thread creation Narcisa Ana Maria Vasile
  0 siblings, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-08-18 21:19 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

This patchset enables the new EAL thread API.
The newly defined thread attributes, priority and affinity,
are used in eal/windows when creating the threads. Similarly, 
some changes have been done in eal/linux/eal.c and eal/freebsd/eal.c
to initialize priority to a default value and set thread attributes.

The user is offered the option of either using the rte_thread_* API or
a 3rd party thread library, through a meson flag
called "use_external_thread_lib".
By default, this flag is set to FALSE, which means Windows libraries
and applications will use the EAL rte_thread_* API 
defined in windows/rte_thread.c for managing threads.
When the flag is set to TRUE, the common/rte_thread.c file is compiled
and an external thread library is used.

This patchset adds a new function for creating control threads that
uses the new thread API.
It enables the usage of the new function in Windows code and common code.
The old function is kept to avoid ABI break, however, its definition
is commented away on Windows, since the pthread_t and pthread_attr_t
arguments that it receives have been replaced with the new API on Windows.
This allows testing the "eal: Add EAL API for threading" that this
patchset depends on.

The ethdev lib also contains some changes that break the ABI.
Enabling the new EAL thread API will probably require going through
the proper process of ABI changes.

Depends-on: series-18172 ("eal: Add EAL API for threading")

v4:
- Free resources on error path
- Use RTE_FINI to unload kernel32.dll

v3:
- use RTE_INIT to only load kernel32.dll once and get function
  pointer to SetThreadDescription()
- minor fixes

v2:
- fix typo in SetThreadDescription_type function pointer
- add Depends-on on all patches to fix apply errors.
- modify cover letter

Narcisa Vasile (6):
  eal: add function that sets thread name
  eal: add function for control thread creation
  Enable the new EAL thread API in app, drivers and examples
  lib: enable the new EAL thread API
  eal: set affinity and priority attributes
  Allow choice between internal EAL thread API and external lib

 app/test/process.h                            |   8 +-
 app/test/test_lcores.c                        |  18 +-
 app/test/test_link_bonding.c                  |  14 +-
 app/test/test_lpm_perf.c                      |  12 +-
 config/meson.build                            |   1 -
 drivers/bus/dpaa/base/qbman/bman_driver.c     |   5 +-
 drivers/bus/dpaa/base/qbman/dpaa_sys.c        |  14 +-
 drivers/bus/dpaa/base/qbman/process.c         |   6 +-
 drivers/bus/dpaa/dpaa_bus.c                   |  14 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  19 +-
 drivers/common/dpaax/compat.h                 |   2 +-
 drivers/common/mlx5/windows/mlx5_common_os.h  |   1 +
 drivers/compress/mlx5/mlx5_compress.c         |  14 +-
 drivers/event/dlb2/dlb2.c                     |   2 +-
 drivers/event/dlb2/pf/base/dlb2_osdep.h       |   7 +-
 drivers/mempool/dpaa/dpaa_mempool.c           |   2 +-
 drivers/net/af_xdp/rte_eth_af_xdp.c           |  18 +-
 drivers/net/ark/ark_ethdev.c                  |   4 +-
 drivers/net/ark/ark_pktgen.c                  |   4 +-
 drivers/net/atlantic/atl_ethdev.c             |   4 +-
 drivers/net/atlantic/atl_types.h              |   4 +-
 .../net/atlantic/hw_atl/hw_atl_utils_fw2x.c   |  26 +--
 drivers/net/axgbe/axgbe_common.h              |   2 +-
 drivers/net/axgbe/axgbe_dev.c                 |   8 +-
 drivers/net/axgbe/axgbe_ethdev.c              |   8 +-
 drivers/net/axgbe/axgbe_ethdev.h              |   8 +-
 drivers/net/axgbe/axgbe_i2c.c                 |   4 +-
 drivers/net/axgbe/axgbe_mdio.c                |   8 +-
 drivers/net/axgbe/axgbe_phy_impl.c            |   6 +-
 drivers/net/bnxt/bnxt.h                       |  16 +-
 drivers/net/bnxt/bnxt_cpr.c                   |   4 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  54 ++---
 drivers/net/bnxt/bnxt_irq.c                   |   8 +-
 drivers/net/bnxt/bnxt_reps.c                  |  10 +-
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c            |  34 ++--
 drivers/net/bnxt/tf_ulp/bnxt_ulp.h            |   4 +-
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.c          |  28 +--
 drivers/net/bnxt/tf_ulp/ulp_fc_mgr.h          |   8 +-
 drivers/net/bnxt/tf_ulp/ulp_ha_mgr.c          |   4 +-
 drivers/net/bnxt/tf_ulp/ulp_ha_mgr.h          |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                |   2 +-
 drivers/net/dpaa/dpaa_rxtx.c                  |   2 +-
 drivers/net/ena/base/ena_plat_dpdk.h          |  15 +-
 drivers/net/enic/enic.h                       |   2 +-
 drivers/net/ice/ice_dcf_parent.c              |   8 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |   6 +-
 drivers/net/ixgbe/ixgbe_ethdev.h              |   2 +-
 drivers/net/mlx5/linux/mlx5_os.c              |   2 +-
 drivers/net/mlx5/mlx5.c                       |  20 +-
 drivers/net/mlx5/mlx5.h                       |   2 +-
 drivers/net/mlx5/mlx5_txpp.c                  |   8 +-
 drivers/net/mlx5/windows/mlx5_flow_os.c       |  10 +-
 drivers/net/mlx5/windows/mlx5_os.c            |   2 +-
 drivers/net/qede/base/bcm_osal.h              |   8 +-
 drivers/net/vhost/rte_eth_vhost.c             |  24 +--
 .../net/virtio/virtio_user/virtio_user_dev.c  |  30 +--
 .../net/virtio/virtio_user/virtio_user_dev.h  |   2 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |  49 +++--
 drivers/vdpa/mlx5/mlx5_vdpa.c                 |  24 +--
 drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  51 ++---
 examples/kni/main.c                           |   1 +
 .../pthread_shim/pthread_shim.h               |   1 +
 lib/eal/common/eal_common_options.c           |   6 +-
 lib/eal/common/eal_common_thread.c            | 105 +++++++++-
 lib/eal/common/eal_common_trace.c             |   1 +
 lib/eal/common/eal_private.h                  |   2 +-
 lib/eal/common/eal_thread.h                   |   6 +
 lib/eal/common/malloc_mp.c                    |   2 +
 lib/eal/common/rte_thread.c                   |  17 ++
 lib/eal/freebsd/eal.c                         |  53 +++--
 lib/eal/freebsd/eal_alarm.c                   |  12 +-
 lib/eal/freebsd/eal_interrupts.c              |   6 +-
 lib/eal/freebsd/eal_thread.c                  |  10 +-
 lib/eal/include/rte_lcore.h                   |   6 +
 lib/eal/include/rte_per_lcore.h               |   2 +-
 lib/eal/include/rte_thread.h                  |  43 ++++
 lib/eal/linux/eal.c                           |  55 +++--
 lib/eal/linux/eal_alarm.c                     |  10 +-
 lib/eal/linux/eal_interrupts.c                |   8 +-
 lib/eal/linux/eal_thread.c                    |  11 +-
 lib/eal/linux/eal_timer.c                     |   6 +-
 lib/eal/version.map                           |   6 +-
 lib/eal/windows/eal.c                         |  44 +++-
 lib/eal/windows/eal_interrupts.c              |   8 +-
 lib/eal/windows/eal_thread.c                  |  35 +---
 lib/eal/windows/eal_windows.h                 |  10 -
 lib/eal/windows/include/pthread.h             | 192 ------------------
 lib/eal/windows/include/rte_windows.h         |   1 +
 lib/eal/windows/meson.build                   |   7 +-
 lib/eal/windows/rte_thread.c                  |  76 +++++++
 lib/ethdev/rte_ethdev.c                       |   4 +-
 lib/ethdev/rte_ethdev_core.h                  |   4 +-
 lib/ethdev/rte_flow.c                         |   4 +-
 lib/eventdev/rte_event_eth_rx_adapter.c       |   1 +
 lib/vhost/vhost.c                             |   1 +
 meson_options.txt                             |   2 +
 97 files changed, 785 insertions(+), 661 deletions(-)
 delete mode 100644 lib/eal/windows/include/pthread.h

-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v4 2/6] eal: add function for control thread creation
  2021-08-18 21:19  4% ` [dpdk-dev] [PATCH v4 0/6] Enable the internal EAL thread API Narcisa Ana Maria Vasile
@ 2021-08-18 21:19  4%   ` Narcisa Ana Maria Vasile
  0 siblings, 0 replies; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-08-18 21:19 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

The existing rte_ctrl_thread_create() function will be replaced
with rte_thread_ctrl_thread_create(), which uses the internal
EAL thread API.

This patch only introduces the new control thread creation
function. Replacing the old function must be done according
to the ABI change procedures, to avoid an ABI break.
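
For context (not part of this patch), below is a minimal usage sketch of
the proposed API. The "app-monitor" name and monitor_main() routine are
invented for illustration, and rte_thread_func is assumed to have the
void *(*)(void *) shape used by ctrl_thread_wrapper() in this patch.

#include <rte_thread.h>

/* Hypothetical control-thread body; assumed to take void * and
 * return void *, matching ctrl_thread_wrapper() in this patch. */
static void *
monitor_main(void *arg)
{
	(void)arg;
	/* ... poll application state until told to stop ... */
	return NULL;
}

static int
launch_monitor(void)
{
	rte_thread_t tid;
	int ret;

	/* Name is at most 16 characters including the trailing '\0'. */
	ret = rte_thread_ctrl_thread_create(&tid, "app-monitor",
			monitor_main, NULL);
	if (ret != 0)
		return -ret; /* positive errno-style value, e.g. ENOMEM */
	return 0;
}

The call pins the new thread to the control-thread cpuset derived at
rte_eal_init() time, so the caller does not need to set the affinity
itself.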

Signed-off-by: Narcisa Vasile <navasile@microsoft.com>
---
 lib/eal/common/eal_common_thread.c | 81 ++++++++++++++++++++++++++++++
 lib/eal/include/rte_thread.h       | 27 ++++++++++
 lib/eal/version.map                |  1 +
 3 files changed, 109 insertions(+)

diff --git a/lib/eal/common/eal_common_thread.c b/lib/eal/common/eal_common_thread.c
index 1a52f42a2b..79545c67d9 100644
--- a/lib/eal/common/eal_common_thread.c
+++ b/lib/eal/common/eal_common_thread.c
@@ -259,6 +259,87 @@ rte_ctrl_thread_create(pthread_t *thread, const char *name,
 	return -ret;
 }
 
+struct rte_thread_ctrl_ctx {
+	rte_thread_func start_routine;
+	void *arg;
+	const char *name;
+};
+
+static void *ctrl_thread_wrapper(void *arg)
+{
+	struct internal_config *conf = eal_get_internal_configuration();
+	rte_cpuset_t *cpuset = &conf->ctrl_cpuset;
+	struct rte_thread_ctrl_ctx *ctx = arg;
+	rte_thread_func start_routine = ctx->start_routine;
+	void *routine_arg = ctx->arg;
+
+	__rte_thread_init(rte_lcore_id(), cpuset);
+
+	if (ctx->name != NULL) {
+		if (rte_thread_name_set(rte_thread_self(), ctx->name) < 0)
+			RTE_LOG(DEBUG, EAL, "Cannot set name for ctrl thread\n");
+	}
+
+	free(arg);
+
+	return start_routine(routine_arg);
+}
+
+int
+rte_thread_ctrl_thread_create(rte_thread_t *thread, const char *name,
+		rte_thread_func start_routine, void *arg)
+{
+	int ret;
+	rte_thread_attr_t attr;
+	struct internal_config *conf = eal_get_internal_configuration();
+	rte_cpuset_t *cpuset = &conf->ctrl_cpuset;
+	struct rte_thread_ctrl_ctx *ctx = NULL;
+
+	if (start_routine == NULL) {
+		ret = EINVAL;
+		goto cleanup;
+	}
+
+	ctx = malloc(sizeof(*ctx));
+	if (ctx == NULL) {
+		ret = ENOMEM;
+		goto cleanup;
+	}
+
+	ctx->start_routine = start_routine;
+	ctx->arg = arg;
+	ctx->name = name;
+
+	ret = rte_thread_attr_init(&attr);
+	if (ret != 0) {
+		RTE_LOG(DEBUG, EAL, "Cannot init ctrl thread attributes\n");
+		goto cleanup;
+	}
+
+	ret = rte_thread_attr_set_affinity(&attr, cpuset);
+	if (ret != 0) {
+		RTE_LOG(DEBUG, EAL, "Cannot set affinity attribute for ctrl thread\n");
+		goto cleanup;
+	}
+	ret = rte_thread_attr_set_priority(&attr, RTE_THREAD_PRIORITY_NORMAL);
+	if (ret != 0) {
+		RTE_LOG(DEBUG, EAL, "Cannot set priority attribute for ctrl thread\n");
+		goto cleanup;
+	}
+
+	ret = rte_thread_create(thread, &attr, ctrl_thread_wrapper, ctx);
+	if (ret != 0) {
+		RTE_LOG(DEBUG, EAL, "Cannot create ctrl thread\n");
+		goto cleanup;
+	}
+
+	return 0;
+
+cleanup:
+	free(ctx);
+	return ret;
+}
+
 int
 rte_thread_register(void)
 {
diff --git a/lib/eal/include/rte_thread.h b/lib/eal/include/rte_thread.h
index 2f6258e336..e34101cc98 100644
--- a/lib/eal/include/rte_thread.h
+++ b/lib/eal/include/rte_thread.h
@@ -455,6 +455,33 @@ int rte_thread_barrier_destroy(rte_thread_barrier *barrier);
 __rte_experimental
 int rte_thread_name_set(rte_thread_t thread_id, const char *name);
 
+/**
+ * Create a control thread.
+ *
+ * Set affinity and thread name. The affinity of the new thread is based
+ * on the CPU affinity retrieved at the time rte_eal_init() was called,
+ * the dataplane and service lcores are then excluded.
+ *
+ * @param thread
+ *   Filled with the thread id of the new created thread.
+ *
+ * @param name
+ *   The name of the control thread (max 16 characters including '\0').
+ *
+ * @param start_routine
+ *   Function to be executed by the new thread.
+ *
+ * @param arg
+ *   Argument passed to start_routine.
+ *
+ * @return
+ *   On success, return 0;
+ *   On failure, return a positive errno-style error number.
+ */
+__rte_experimental
+int rte_thread_ctrl_thread_create(rte_thread_t *thread, const char *name,
+		rte_thread_func start_routine, void *arg);
+
 /**
  * Create a TLS data key visible to all threads in the process.
  * the created key is later used to get/set a value.
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 7ce8dcea07..67569b1bf9 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -447,6 +447,7 @@ EXPERIMENTAL {
 	rte_thread_barrier_wait;
 	rte_thread_barrier_destroy;
 	rte_thread_name_set;
+	rte_thread_ctrl_thread_create;
 };
 
 INTERNAL {
-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v7] eal: remove sys/queue.h from public headers.
  2021-08-14  2:51  1%       ` [dpdk-dev] [PATCH v6] " William Tu
@ 2021-08-18 23:26  1%         ` William Tu
  2021-08-23 13:03  1%           ` [dpdk-dev] [PATCH v8] " William Tu
  0 siblings, 1 reply; 200+ results
From: William Tu @ 2021-08-18 23:26 UTC (permalink / raw)
  To: dev; +Cc: Dmitry.Kozliuk, nick.connolly

Currently, some public headers include 'sys/queue.h', which is not part
of POSIX (not in POSIX.1, POSIX.1-2001, or POSIX.1-2008) but is usually
provided by the Linux/BSD system libraries. The file is missing on
Windows. During the Windows build, DPDK uses a bundled copy, so building
the DPDK libraries works fine. But when OVS or other applications consume
DPDK as a library on Windows, compilation fails because the public
headers still include 'sys/queue.h', which does not exist there.

One solution is to install 'lib/eal/windows/include/sys/queue.h' into the
Windows environment, such as [1]. However, this means DPDK exports the
functionality of 'sys/queue.h' into the environment, which might cause
symbol, macro and header clashes with other applications.

The patch fixes it by removing the "#include <sys/queue.h>" from
DPDK public headers, so programs including DPDK headers don't depend
on the system to provide 'sys/queue.h'. When these public headers use
macros such as TAILQ_xxx, they are replaced with the RTE_-prefixed
equivalents. For Windows, we copy the definitions from <sys/queue.h>
to rte_os.h in Windows EAL. Note that these RTE_ macros are compatible
with <sys/queue.h>, both at the level of API (to use with <sys/queue.h>
macros in C files) and ABI (to avoid breaking it).

Additionally, TAILQ_FOREACH_SAFE is not part of <sys/queue.h>, so the
patch replaces it with RTE_TAILQ_FOREACH_SAFE.

[1] http://mails.dpdk.org/archives/dev/2021-August/216304.html
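
As a reading aid (not part of this patch), the sketch below shows how a
consumer is expected to mix the RTE_-prefixed macros with the system
<sys/queue.h> macros after this change. The struct and list names are
invented for illustration, and the RTE_TAILQ_* definitions are assumed to
come from <rte_os.h> as added by this patch.

#include <stdlib.h>
#include <sys/queue.h>   /* TAILQ_* still usable in .c files */
#include <rte_os.h>      /* RTE_TAILQ_* definitions */

struct item {
	RTE_TAILQ_ENTRY(item) next;   /* was: TAILQ_ENTRY(item) next; */
	int value;
};

RTE_TAILQ_HEAD(item_list, item);      /* was: TAILQ_HEAD(item_list, item); */

static void
flush_items(struct item_list *list)
{
	struct item *it, *tmp;

	/* The SAFE variant allows removing and freeing the current
	 * element while iterating; plain RTE_TAILQ_FOREACH would not. */
	RTE_TAILQ_FOREACH_SAFE(it, list, next, tmp) {
		TAILQ_REMOVE(list, it, next);  /* system macro, ABI-compatible */
		free(it);
	}
}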

Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
Signed-off-by: William Tu <u9012063@gmail.com>
---
v6-v7:
* remove some redundant "#include <sys/queue.h>"
* remove extra newline, add comment at rte_os.h for windows
  use of bundled sys/queue

v5-v6:
* fix tab/indent issue, fix type and spelling
* fix duplicate RTE_TAILQ_FOREACH_SAFE
* fix build error due to drivers/net/mlx5/mlx5_flow_meter.c
---
 drivers/bus/auxiliary/private.h            |  1 +
 drivers/bus/auxiliary/rte_bus_auxiliary.h  |  5 ++--
 drivers/bus/dpaa/dpaa_bus.c                |  4 +--
 drivers/bus/fslmc/fslmc_bus.c              |  4 +--
 drivers/bus/fslmc/fslmc_vfio.c             |  9 ++++---
 drivers/bus/ifpga/rte_bus_ifpga.h          |  8 +++---
 drivers/bus/pci/pci_params.c               |  2 ++
 drivers/bus/pci/rte_bus_pci.h              | 13 +++++----
 drivers/bus/pci/windows/pci.c              |  3 +++
 drivers/bus/pci/windows/pci_netuio.c       |  2 ++
 drivers/bus/vdev/rte_bus_vdev.h            |  7 +++--
 drivers/bus/vdev/vdev.c                    |  3 ++-
 drivers/bus/vmbus/rte_bus_vmbus.h          | 13 +++++----
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c         |  2 +-
 drivers/net/bonding/rte_eth_bond_flow.c    |  2 +-
 drivers/net/failsafe/failsafe_flow.c       |  2 +-
 drivers/net/i40e/i40e_ethdev.c             |  9 ++++---
 drivers/net/i40e/i40e_ethdev.h             |  1 +
 drivers/net/i40e/i40e_flow.c               |  6 ++---
 drivers/net/i40e/i40e_hash.c               |  2 +-
 drivers/net/i40e/rte_pmd_i40e.c            |  6 ++---
 drivers/net/iavf/iavf_generic_flow.c       | 14 +++++-----
 drivers/net/ice/ice_dcf_ethdev.c           |  1 +
 drivers/net/ice/ice_ethdev.c               |  4 +--
 drivers/net/ice/ice_generic_flow.c         | 14 +++++-----
 drivers/net/ipn3ke/ipn3ke_flow.c           |  2 +-
 drivers/net/mlx5/mlx5_flow_dv.c            |  2 +-
 drivers/net/mlx5/mlx5_flow_meter.c         |  2 +-
 drivers/net/softnic/rte_eth_softnic_flow.c |  3 ++-
 drivers/net/softnic/rte_eth_softnic_swq.c  |  2 +-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c        |  2 +-
 lib/bbdev/rte_bbdev.h                      |  2 +-
 lib/cryptodev/rte_cryptodev.h              |  2 +-
 lib/cryptodev/rte_cryptodev_pmd.h          |  2 +-
 lib/eal/common/eal_common_devargs.c        |  4 +--
 lib/eal/common/eal_common_log.c            |  1 +
 lib/eal/common/eal_common_options.c        |  2 +-
 lib/eal/common/eal_private.h               |  1 +
 lib/eal/freebsd/include/rte_os.h           | 15 +++++++++++
 lib/eal/include/rte_bus.h                  |  5 ++--
 lib/eal/include/rte_class.h                |  6 ++---
 lib/eal/include/rte_dev.h                  |  5 ++--
 lib/eal/include/rte_devargs.h              |  3 +--
 lib/eal/include/rte_log.h                  |  1 -
 lib/eal/include/rte_service.h              |  1 -
 lib/eal/include/rte_tailq.h                | 15 +++--------
 lib/eal/linux/include/rte_os.h             | 15 +++++++++++
 lib/eal/windows/eal_alarm.c                |  1 +
 lib/eal/windows/include/rte_os.h           | 31 ++++++++++++++++++++++
 lib/efd/rte_efd.c                          |  2 +-
 lib/ethdev/rte_ethdev_core.h               |  2 +-
 lib/hash/rte_fbk_hash.h                    |  1 -
 lib/hash/rte_thash.c                       |  2 ++
 lib/ip_frag/rte_ip_frag.h                  |  4 +--
 lib/mempool/rte_mempool.c                  |  2 +-
 lib/mempool/rte_mempool.h                  |  9 +++----
 lib/pci/rte_pci.h                          |  1 -
 lib/ring/rte_ring_core.h                   |  1 -
 lib/table/rte_swx_table.h                  |  7 ++---
 lib/table/rte_swx_table_selector.h         |  5 ++--
 lib/vhost/iotlb.c                          | 11 ++++----
 lib/vhost/rte_vdpa_dev.h                   |  2 +-
 lib/vhost/vdpa.c                           |  2 +-
 63 files changed, 186 insertions(+), 127 deletions(-)

diff --git a/drivers/bus/auxiliary/private.h b/drivers/bus/auxiliary/private.h
index 9987e8b501..d22e83cf7a 100644
--- a/drivers/bus/auxiliary/private.h
+++ b/drivers/bus/auxiliary/private.h
@@ -7,6 +7,7 @@
 
 #include <stdbool.h>
 #include <stdio.h>
+#include <sys/queue.h>
 
 #include "rte_bus_auxiliary.h"
 
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index 2462bad2ba..b1f5610404 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -113,7 +112,7 @@ typedef int (rte_auxiliary_dma_unmap_t)(struct rte_auxiliary_device *dev,
  * A structure describing an auxiliary device.
  */
 struct rte_auxiliary_device {
-	TAILQ_ENTRY(rte_auxiliary_device) next;   /**< Next probed device. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
 	struct rte_device device;                 /**< Inherit core device */
 	char name[RTE_DEV_NAME_MAX_LEN + 1];      /**< ASCII device name */
 	struct rte_intr_handle intr_handle;       /**< Interrupt handle */
@@ -124,7 +123,7 @@ struct rte_auxiliary_device {
  * A structure describing an auxiliary driver.
  */
 struct rte_auxiliary_driver {
-	TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
 	struct rte_driver driver;             /**< Inherit core driver. */
 	struct rte_auxiliary_bus *bus;        /**< Auxiliary bus reference. */
 	rte_auxiliary_match_t *match;         /**< Device match function. */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index e499305d85..6cab2ae760 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -105,7 +105,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		comp = compare_dpaa_devices(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
@@ -245,7 +245,7 @@ dpaa_clean_device_list(void)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index becc455f6b..8c8f8a298d 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -45,7 +45,7 @@ cleanup_fslmc_device_list(void)
 	struct rte_dpaa2_device *dev;
 	struct rte_dpaa2_device *t_dev;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
 		TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
@@ -82,7 +82,7 @@ insert_in_device_list(struct rte_dpaa2_device *newdev)
 	struct rte_dpaa2_device *dev = NULL;
 	struct rte_dpaa2_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
 		comp = compare_dpaa2_devname(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c8373e627a..852fcfc4dd 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -808,7 +808,8 @@ fslmc_vfio_process_group(void)
 	bool is_dpmcp_in_blocklist = false, is_dpio_in_blocklist = false;
 	int dpmcp_count = 0, dpio_count = 0, current_device;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			dpmcp_count++;
 			if (dev->device.devargs &&
@@ -825,7 +826,8 @@ fslmc_vfio_process_group(void)
 
 	/* Search the MCP as that should be initialized first. */
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			current_device++;
 			if (dev->device.devargs &&
@@ -872,7 +874,8 @@ fslmc_vfio_process_group(void)
 	}
 
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_IO)
 			current_device++;
 		if (dev->device.devargs &&
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index b43084155a..a85e90d384 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -28,9 +28,9 @@ struct rte_afu_device;
 struct rte_afu_driver;
 
 /** Double linked list of Intel FPGA AFU device. */
-TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
+RTE_TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
 /** Double linked list of Intel FPGA AFU device drivers. */
-TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
+RTE_TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
 
 #define IFPGA_BUS_BITSTREAM_PATH_MAX_LEN 256
 
@@ -71,7 +71,7 @@ struct rte_afu_shared {
  * A structure describing a AFU device.
  */
 struct rte_afu_device {
-	TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
+	RTE_TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
 	struct rte_device device;               /**< Inherit core device */
 	struct rte_rawdev *rawdev;    /**< Point Rawdev */
 	struct rte_afu_id id;                   /**< AFU id within FPGA. */
@@ -105,7 +105,7 @@ typedef int (afu_remove_t)(struct rte_afu_device *);
  * A structure describing a AFU device.
  */
 struct rte_afu_driver {
-	TAILQ_ENTRY(rte_afu_driver) next;       /**< Next afu driver. */
+	RTE_TAILQ_ENTRY(rte_afu_driver) next;   /**< Next afu driver. */
 	struct rte_driver driver;               /**< Inherit core driver. */
 	afu_probe_t *probe;                     /**< Device Probe function. */
 	afu_remove_t *remove;                   /**< Device Remove function. */
diff --git a/drivers/bus/pci/pci_params.c b/drivers/bus/pci/pci_params.c
index 3192e9c967..717388753d 100644
--- a/drivers/bus/pci/pci_params.c
+++ b/drivers/bus/pci/pci_params.c
@@ -2,6 +2,8 @@
  * Copyright 2018 Gaëtan Rivet
  */
 
+#include <sys/queue.h>
+
 #include <rte_bus.h>
 #include <rte_bus_pci.h>
 #include <rte_dev.h>
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 583470e831..673a2850c1 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -37,16 +36,16 @@ struct rte_pci_device;
 struct rte_pci_driver;
 
 /** List of PCI devices */
-TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
+RTE_TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
 /** List of PCI drivers */
-TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
+RTE_TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
 
 /* PCI Bus iterators */
 #define FOREACH_DEVICE_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
 
 struct rte_devargs;
 
@@ -64,7 +63,7 @@ enum rte_pci_kernel_driver {
  * A structure describing a PCI device.
  */
 struct rte_pci_device {
-	TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
+	RTE_TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
 	struct rte_device device;           /**< Inherit core device */
 	struct rte_pci_addr addr;           /**< PCI location. */
 	struct rte_pci_id id;               /**< PCI ID. */
@@ -160,7 +159,7 @@ typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
  * A structure describing a PCI driver.
  */
 struct rte_pci_driver {
-	TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
 	struct rte_driver driver;          /**< Inherit core driver. */
 	struct rte_pci_bus *bus;           /**< PCI bus reference. */
 	rte_pci_probe_t *probe;            /**< Device probe function. */
diff --git a/drivers/bus/pci/windows/pci.c b/drivers/bus/pci/windows/pci.c
index d39a7748b8..d7bd5d6e80 100644
--- a/drivers/bus/pci/windows/pci.c
+++ b/drivers/bus/pci/windows/pci.c
@@ -1,6 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright 2020 Mellanox Technologies, Ltd
  */
+
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/pci/windows/pci_netuio.c b/drivers/bus/pci/windows/pci_netuio.c
index 1bf9133f71..a0b175a8fc 100644
--- a/drivers/bus/pci/windows/pci_netuio.c
+++ b/drivers/bus/pci/windows/pci_netuio.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2020 Intel Corporation.
  */
 
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
index fc315d10fa..2856799953 100644
--- a/drivers/bus/vdev/rte_bus_vdev.h
+++ b/drivers/bus/vdev/rte_bus_vdev.h
@@ -15,12 +15,11 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <rte_dev.h>
 #include <rte_devargs.h>
 
 struct rte_vdev_device {
-	TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
+	RTE_TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
 	struct rte_device device;               /**< Inherit core device */
 };
 
@@ -53,7 +52,7 @@ rte_vdev_device_args(const struct rte_vdev_device *dev)
 }
 
 /** Double linked list of virtual device drivers. */
-TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
+RTE_TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
 
 /**
  * Probe function called for each virtual device driver once.
@@ -107,7 +106,7 @@ typedef int (rte_vdev_dma_unmap_t)(struct rte_vdev_device *dev, void *addr,
  * A virtual device driver abstraction.
  */
 struct rte_vdev_driver {
-	TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
 	struct rte_driver driver;        /**< Inherited general driver. */
 	rte_vdev_probe_t *probe;         /**< Virtual device probe function. */
 	rte_vdev_remove_t *remove;       /**< Virtual device remove function. */
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index 281a2c34e8..a8d8b2327e 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -100,7 +100,8 @@ rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 	struct vdev_custom_scan *custom_scan, *tmp_scan;
 
 	rte_spinlock_lock(&vdev_custom_scan_lock);
-	TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next, tmp_scan) {
+	RTE_TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next,
+				tmp_scan) {
 		if (custom_scan->callback != callback ||
 				(custom_scan->user_arg != (void *)-1 &&
 				custom_scan->user_arg != user_arg))
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 4cf73ce815..6bcff66468 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -20,7 +20,6 @@ extern "C" {
 #include <limits.h>
 #include <stdbool.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -38,15 +37,15 @@ struct rte_vmbus_bus;
 struct vmbus_channel;
 struct vmbus_mon_page;
 
-TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
-TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
+RTE_TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
+RTE_TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
 
 /* VMBus iterators */
 #define FOREACH_DEVICE_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
 
 /** Maximum number of VMBUS resources. */
 enum hv_uio_map {
@@ -62,7 +61,7 @@ enum hv_uio_map {
  * A structure describing a VMBUS device.
  */
 struct rte_vmbus_device {
-	TAILQ_ENTRY(rte_vmbus_device) next;    /**< Next probed VMBUS device */
+	RTE_TAILQ_ENTRY(rte_vmbus_device) next; /**< Next probed VMBUS device */
 	const struct rte_vmbus_driver *driver; /**< Associated driver */
 	struct rte_device device;              /**< Inherit core device */
 	rte_uuid_t device_id;		       /**< VMBUS device id */
@@ -93,7 +92,7 @@ typedef int (vmbus_remove_t)(struct rte_vmbus_device *);
  * A structure describing a VMBUS driver.
  */
 struct rte_vmbus_driver {
-	TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
 	struct rte_driver driver;
 	struct rte_vmbus_bus *bus;          /**< VM bus reference. */
 	vmbus_probe_t *probe;               /**< Device Probe function. */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index dbf85e4eda..ac86b70caf 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -2018,7 +2018,7 @@ bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx)
 	struct ulp_context_list_entry	*entry, *temp;
 
 	rte_spinlock_lock(&bnxt_ulp_ctxt_lock);
-	TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
 		if (entry->ulp_ctx == ulp_ctx) {
 			TAILQ_REMOVE(&ulp_cntx_list, entry, next);
 			rte_free(entry);
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 417f76bf60..65b77faae7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -157,7 +157,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 	/* Destroy all bond flows from its slaves instead of flushing them to
 	 * keep the LACP flow or any other external flows.
 	 */
-	TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
 		lret = bond_flow_destroy(dev, flow, err);
 		if (unlikely(lret != 0))
 			ret = lret;
diff --git a/drivers/net/failsafe/failsafe_flow.c b/drivers/net/failsafe/failsafe_flow.c
index 5e2b5f7c67..354f9fec20 100644
--- a/drivers/net/failsafe/failsafe_flow.c
+++ b/drivers/net/failsafe/failsafe_flow.c
@@ -180,7 +180,7 @@ fs_flow_flush(struct rte_eth_dev *dev,
 			return ret;
 		}
 	}
-	TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
 		TAILQ_REMOVE(&PRIV(dev)->flow_list, flow, next);
 		fs_flow_release(&flow);
 	}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..6590363556 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -5436,7 +5436,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* VSI has child to attach, release child first */
 	if (vsi->veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5444,7 +5444,8 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 	}
 
 	if (vsi->floating_veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head,
+			list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5452,7 +5453,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* Remove all macvlan filters of the VSI */
 	i40e_vsi_remove_all_macvlan_filter(vsi);
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		rte_free(f);
 
 	if (vsi->type != I40E_VSI_MAIN &&
@@ -6055,7 +6056,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
 	i = 0;
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		mac_filter[i] = f->mac_info;
 		ret = i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr);
 		if (ret) {
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60..374b73e4a7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -6,6 +6,7 @@
 #define _I40E_ETHDEV_H_
 
 #include <stdint.h>
+#include <sys/queue.h>
 
 #include <rte_time.h>
 #include <rte_kvargs.h>
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c..e41a84f1d7 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4917,7 +4917,7 @@ i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
 		}
 
 		/* Delete FDIR flows in flow list. */
-		TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 			if (flow->filter_type == RTE_ETH_FILTER_FDIR) {
 				TAILQ_REMOVE(&pf->flow_list, flow, node);
 			}
@@ -4972,7 +4972,7 @@ i40e_flow_flush_ethertype_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete ethertype flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_ETHERTYPE) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
@@ -5000,7 +5000,7 @@ i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete tunnel flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_TUNNEL) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfc..6579b1a00b 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -1366,7 +1366,7 @@ i40e_hash_filter_flush(struct i40e_pf *pf)
 {
 	struct rte_flow *flow, *next;
 
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
 		if (flow->filter_type != RTE_ETH_FILTER_HASH)
 			continue;
 
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index 2e34140c5b..ec24046440 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -216,7 +216,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* remove all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		vlan_num = vsi->vlan_num;
 		filter_type = f->mac_info.filter_type;
 		if (filter_type == I40E_MACVLAN_PERFECT_MATCH ||
@@ -274,7 +274,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* restore all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		if (f->mac_info.filter_type == I40E_MACVLAN_PERFECT_MATCH ||
 		    f->mac_info.filter_type == I40E_MACVLAN_HASH_MATCH) {
 			/**
@@ -563,7 +563,7 @@ rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id,
 	rte_ether_addr_copy(mac_addr, &vf->mac_addr);
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		if (i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr)
 				!= I40E_SUCCESS)
 			PMD_DRV_LOG(WARNING, "Delete MAC failed");
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 1fe270fb22..b86d99e57d 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -1637,7 +1637,7 @@ iavf_flow_init(struct iavf_adapter *ad)
 	TAILQ_INIT(&vf->dist_parser_list);
 	rte_spinlock_init(&vf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 				     engine->type);
@@ -1663,7 +1663,7 @@ iavf_flow_uninit(struct iavf_adapter *ad)
 	struct iavf_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1733,7 +1733,7 @@ iavf_unregister_parser(struct iavf_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -1917,7 +1917,7 @@ iavf_parse_engine_create(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -1946,7 +1946,7 @@ iavf_parse_engine_validate(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2089,7 +2089,7 @@ iavf_flow_is_valid(struct rte_flow *flow)
 	void *temp;
 
 	if (flow && flow->engine) {
-		TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 			if (engine == flow->engine)
 				return true;
 		}
@@ -2142,7 +2142,7 @@ iavf_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
 		ret = iavf_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da87..629e88980d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -4,6 +4,7 @@
 
 #include <errno.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 #include <sys/types.h>
 #include <unistd.h>
 
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954..fadd5f2e5a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1104,7 +1104,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (!vsi || !vsi->mac_num)
 		return -EINVAL;
 
-	TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
 		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
@@ -1115,7 +1115,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (vsi->vlan_num == 0)
 		return 0;
 
-	TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
 		ret = ice_remove_vlan_filter(vsi, &v_f->vlan_info.vlan);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 66b5743abf..3e557efe0c 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1820,7 +1820,7 @@ ice_flow_init(struct ice_adapter *ad)
 	TAILQ_INIT(&pf->dist_parser_list);
 	rte_spinlock_init(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 					engine->type);
@@ -1846,7 +1846,7 @@ ice_flow_uninit(struct ice_adapter *ad)
 	struct ice_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1946,7 +1946,7 @@ ice_unregister_parser(struct ice_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -2272,7 +2272,7 @@ ice_parse_engine_create(struct ice_adapter *ad,
 	void *meta = NULL;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		int ret;
 
 		if (parser_node->parser->parse_pattern_action(ad,
@@ -2305,7 +2305,7 @@ ice_parse_engine_validate(struct ice_adapter *ad,
 	struct ice_flow_parser_node *parser_node;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2477,7 +2477,7 @@ ice_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		ret = ice_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
@@ -2541,7 +2541,7 @@ ice_flow_redirect(struct ice_adapter *ad,
 
 	rte_spinlock_lock(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		if (!p_flow->engine->redirect)
 			continue;
 		ret = p_flow->engine->redirect(ad, p_flow, rd);
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index c702e19ea5..f5867ca055 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -1231,7 +1231,7 @@ ipn3ke_flow_flush(struct rte_eth_dev *dev,
 	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
 	struct rte_flow *flow, *temp;
 
-	TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
 		TAILQ_REMOVE(&hw->flow_list, flow, next);
 		rte_free(flow);
 	}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 31d857030f..ba2bf4de37 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -15099,7 +15099,7 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev,
 		    policy->act_cnt[i].fate_action == MLX5_FLOW_FATE_MTR)
 			next_fm = mlx5_flow_meter_find(priv,
 					policy->act_cnt[i].next_mtr_id, NULL);
-		TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
+		RTE_TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
 				   next_port, tmp) {
 			claim_zero(mlx5_flow_os_destroy_flow(color_rule->rule));
 			tbl = container_of(color_rule->matcher->tbl,
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index a24bd9c7ae..ba4e9fca17 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -2168,7 +2168,7 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
 			priv->mtr_idx_tbl = NULL;
 		}
 	} else {
-		TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
+		RTE_TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
 			fm = &legacy_fm->fm;
 			if (mlx5_flow_meter_params_flush(dev, fm, 0))
 				return -rte_mtr_error_set(error, EINVAL,
diff --git a/drivers/net/softnic/rte_eth_softnic_flow.c b/drivers/net/softnic/rte_eth_softnic_flow.c
index 27eaf380cd..7d054c38d2 100644
--- a/drivers/net/softnic/rte_eth_softnic_flow.c
+++ b/drivers/net/softnic/rte_eth_softnic_flow.c
@@ -2207,7 +2207,8 @@ pmd_flow_flush(struct rte_eth_dev *dev,
 			void *temp;
 			int status;
 
-			TAILQ_FOREACH_SAFE(flow, &table->flows, node, temp) {
+			RTE_TAILQ_FOREACH_SAFE(flow, &table->flows, node,
+				temp) {
 				/* Rule delete. */
 				status = softnic_pipeline_table_rule_delete
 						(softnic,
diff --git a/drivers/net/softnic/rte_eth_softnic_swq.c b/drivers/net/softnic/rte_eth_softnic_swq.c
index 2083d0a976..afe6f05e29 100644
--- a/drivers/net/softnic/rte_eth_softnic_swq.c
+++ b/drivers/net/softnic/rte_eth_softnic_swq.c
@@ -39,7 +39,7 @@ softnic_softnic_swq_free_keep_rxq_txq(struct pmd_internals *p)
 {
 	struct softnic_swq *swq, *tswq;
 
-	TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
+	RTE_TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
 		if ((strncmp(swq->name, "RXQ", strlen("RXQ")) == 0) ||
 			(strncmp(swq->name, "TXQ", strlen("TXQ")) == 0))
 			continue;
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c961e18d67..7b80370b36 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1606,7 +1606,7 @@ remove_hw_queues_from_list(struct dpaa2_dpdmai_dev *dpdmai_dev)
 
 	DPAA2_QDMA_FUNC_TRACE();
 
-	TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
+	RTE_TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
 		if (queue->dpdmai_dev == dpdmai_dev) {
 			TAILQ_REMOVE(&qdma_queue_list, queue, next);
 			rte_free(queue);
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 7017124414..3ebf62e697 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -434,7 +434,7 @@ struct rte_bbdev_callback;
 struct rte_intr_handle;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
+RTE_TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
 
 /**
  * @internal The data structure associated with a device. Drivers can access
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 11f4e6fdbf..f86bf2260b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -879,7 +879,7 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 struct rte_cryptodev_callback;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+RTE_TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
 
 /**
  * Structure used to hold information about the callbacks to be called for a
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/rte_cryptodev_pmd.h
index 1274436870..9542cbf263 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/rte_cryptodev_pmd.h
@@ -66,7 +66,7 @@ struct rte_cryptodev_global {
 
 /* Cryptodev driver, containing the driver ID */
 struct cryptodev_driver {
-	TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
 	const struct rte_driver *driver;
 	uint8_t id;
 };
diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c
index 23aaf8b7e4..2e2f35c47e 100644
--- a/lib/eal/common/eal_common_devargs.c
+++ b/lib/eal/common/eal_common_devargs.c
@@ -291,7 +291,7 @@ rte_devargs_insert(struct rte_devargs **da)
 	if (*da == NULL || (*da)->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
 		if (listed_da == *da)
 			/* devargs already in the list */
 			return 0;
@@ -358,7 +358,7 @@ rte_devargs_remove(struct rte_devargs *devargs)
 	if (devargs == NULL || devargs->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
 		if (strcmp(d->bus->name, devargs->bus->name) == 0 &&
 		    strcmp(d->name, devargs->name) == 0) {
 			TAILQ_REMOVE(&devargs_list, d, next);
diff --git a/lib/eal/common/eal_common_log.c b/lib/eal/common/eal_common_log.c
index ec8fe23a7f..1be35f5397 100644
--- a/lib/eal/common/eal_common_log.c
+++ b/lib/eal/common/eal_common_log.c
@@ -10,6 +10,7 @@
 #include <errno.h>
 #include <regex.h>
 #include <fnmatch.h>
+#include <sys/queue.h>
 
 #include <rte_eal.h>
 #include <rte_log.h>
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index ff5861b5f3..24f5ceaab0 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -283,7 +283,7 @@ eal_option_device_parse(void)
 	void *tmp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
 		if (ret == 0) {
 			ret = rte_devargs_add(devopt->type, devopt->arg);
 			if (ret)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 64cf4e81c8..86dab1f057 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -8,6 +8,7 @@
 #include <stdbool.h>
 #include <stdint.h>
 #include <stdio.h>
+#include <sys/queue.h>
 
 #include <rte_dev.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/freebsd/include/rte_os.h b/lib/eal/freebsd/include/rte_os.h
index 627f0483ab..06f30ce238 100644
--- a/lib/eal/freebsd/include/rte_os.h
+++ b/lib/eal/freebsd/include/rte_os.h
@@ -11,6 +11,21 @@
  */
 
 #include <pthread_np.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
+
 
 typedef cpuset_t rte_cpuset_t;
 #define RTE_HAS_CPUSET
diff --git a/lib/eal/include/rte_bus.h b/lib/eal/include/rte_bus.h
index 80b154fb98..84d364df3f 100644
--- a/lib/eal/include/rte_bus.h
+++ b/lib/eal/include/rte_bus.h
@@ -19,13 +19,12 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_log.h>
 #include <rte_dev.h>
 
 /** Double linked list of buses */
-TAILQ_HEAD(rte_bus_list, rte_bus);
+RTE_TAILQ_HEAD(rte_bus_list, rte_bus);
 
 
 /**
@@ -250,7 +249,7 @@ typedef enum rte_iova_mode (*rte_bus_get_iommu_class_t)(void);
  * A structure describing a generic bus.
  */
 struct rte_bus {
-	TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
+	RTE_TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
 	const char *name;            /**< Name of the bus */
 	rte_bus_scan_t scan;         /**< Scan for devices attached to bus */
 	rte_bus_probe_t probe;       /**< Probe devices on bus */
diff --git a/lib/eal/include/rte_class.h b/lib/eal/include/rte_class.h
index 856d09b22d..d560339652 100644
--- a/lib/eal/include/rte_class.h
+++ b/lib/eal/include/rte_class.h
@@ -22,18 +22,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
-
 #include <rte_dev.h>
 
 /** Double linked list of classes */
-TAILQ_HEAD(rte_class_list, rte_class);
+RTE_TAILQ_HEAD(rte_class_list, rte_class);
 
 /**
  * A structure describing a generic device class.
  */
 struct rte_class {
-	TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
+	RTE_TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
 	const char *name; /**< Name of the class */
 	rte_dev_iterate_t dev_iterate; /**< Device iterator. */
 };
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6dd72c11a1..f6efe0c94e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -18,7 +18,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_compat.h>
@@ -75,7 +74,7 @@ struct rte_mem_resource {
  * A structure describing a device driver.
  */
 struct rte_driver {
-	TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
 	const char *name;                   /**< Driver name. */
 	const char *alias;              /**< Driver alias. */
 };
@@ -90,7 +89,7 @@ struct rte_driver {
  * A structure describing a generic device.
  */
 struct rte_device {
-	TAILQ_ENTRY(rte_device) next; /**< Next device */
+	RTE_TAILQ_ENTRY(rte_device) next; /**< Next device */
 	const char *name;             /**< Device name */
 	const struct rte_driver *driver; /**< Driver assigned after probing */
 	const struct rte_bus *bus;    /**< Bus handle assigned on scan */
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index cd90944fe8..957477b398 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -21,7 +21,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 #include <rte_compat.h>
 #include <rte_bus.h>
 
@@ -76,7 +75,7 @@ enum rte_devtype {
  */
 struct rte_devargs {
 	/** Next in list. */
-	TAILQ_ENTRY(rte_devargs) next;
+	RTE_TAILQ_ENTRY(rte_devargs) next;
 	/** Type of device. */
 	enum rte_devtype type;
 	/** Device policy. */
diff --git a/lib/eal/include/rte_log.h b/lib/eal/include/rte_log.h
index b706bb8710..bb3523467b 100644
--- a/lib/eal/include/rte_log.h
+++ b/lib/eal/include/rte_log.h
@@ -21,7 +21,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdarg.h>
 #include <stdbool.h>
-#include <sys/queue.h>
 
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/eal/include/rte_service.h b/lib/eal/include/rte_service.h
index c7d037d862..1c9275c32a 100644
--- a/lib/eal/include/rte_service.h
+++ b/lib/eal/include/rte_service.h
@@ -29,7 +29,6 @@ extern "C" {
 
 #include<stdio.h>
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/include/rte_tailq.h b/lib/eal/include/rte_tailq.h
index b6fe4e5f78..b32033ad66 100644
--- a/lib/eal/include/rte_tailq.h
+++ b/lib/eal/include/rte_tailq.h
@@ -15,17 +15,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <stdio.h>
 #include <rte_debug.h>
 
 /** dummy structure type used by the rte_tailq APIs */
 struct rte_tailq_entry {
-	TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
+	RTE_TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
 	void *data; /**< Pointer to the data referenced by this tailq entry */
 };
 /** dummy */
-TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
+RTE_TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
 
 #define RTE_TAILQ_NAMESIZE 32
 
@@ -48,7 +47,7 @@ struct rte_tailq_elem {
 	 * rte_eal_tailqs_init()
 	 */
 	struct rte_tailq_head *head;
-	TAILQ_ENTRY(rte_tailq_elem) next;
+	RTE_TAILQ_ENTRY(rte_tailq_elem) next;
 	const char name[RTE_TAILQ_NAMESIZE];
 };
 
@@ -125,14 +124,6 @@ RTE_INIT(tailqinitfn_ ##t) \
 		rte_panic("Cannot initialize tailq: %s\n", t.name); \
 }
 
-/* This macro permits both remove and free var within the loop safely.*/
-#ifndef TAILQ_FOREACH_SAFE
-#define TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
-	for ((var) = TAILQ_FIRST((head));			\
-	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1);	\
-	    (var) = (tvar))
-#endif
-
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/eal/linux/include/rte_os.h b/lib/eal/linux/include/rte_os.h
index 1618b4df22..ce5b0aed52 100644
--- a/lib/eal/linux/include/rte_os.h
+++ b/lib/eal/linux/include/rte_os.h
@@ -11,6 +11,21 @@
  */
 
 #include <sched.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
+
 
 #ifdef CPU_SETSIZE /* may require _GNU_SOURCE */
 typedef cpu_set_t rte_cpuset_t;
diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c
index e5dc54efb8..103c1f909d 100644
--- a/lib/eal/windows/eal_alarm.c
+++ b/lib/eal/windows/eal_alarm.c
@@ -4,6 +4,7 @@
 
 #include <stdatomic.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 
 #include <rte_alarm.h>
 #include <rte_spinlock.h>
diff --git a/lib/eal/windows/include/rte_os.h b/lib/eal/windows/include/rte_os.h
index 66c711d458..0cbe1dbc1e 100644
--- a/lib/eal/windows/include/rte_os.h
+++ b/lib/eal/windows/include/rte_os.h
@@ -18,6 +18,37 @@
 extern "C" {
 #endif
 
+/* These macros are compatible with bundled sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) \
+struct name { \
+	struct type *tqh_first; /* first element */ \
+	struct type **tqh_last; /* addr of last next element */ \
+}
+#define RTE_TAILQ_ENTRY(type) \
+struct { \
+	struct type *tqe_next; /* next element */ \
+	struct type **tqe_prev; /* address of previous next element */ \
+}
+#define RTE_TAILQ_FOREACH(var, head, field) \
+	for ((var) = RTE_TAILQ_FIRST((head)); \
+	    (var); \
+	    (var) = RTE_TAILQ_NEXT((var), field))
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+	    (var) = (tvar))
+#define RTE_TAILQ_FIRST(head) ((head)->tqh_first)
+#define RTE_TAILQ_NEXT(elm, field) ((elm)->field.tqe_next)
+#define RTE_STAILQ_HEAD(name, type) \
+struct name { \
+	struct type *stqh_first;/* first element */ \
+	struct type **stqh_last;/* addr of last next element */ \
+}
+#define RTE_STAILQ_ENTRY(type) \
+struct { \
+	struct type *stqe_next; /* next element */ \
+}
+
 /* cpu_set macros implementation */
 #define RTE_CPU_AND(dst, src1, src2) CPU_AND(dst, src1, src2)
 #define RTE_CPU_OR(dst, src1, src2) CPU_OR(dst, src1, src2)
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index 77f46809f8..5bf517fee9 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -759,7 +759,7 @@ rte_efd_free(struct rte_efd_table *table)
 	efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list);
 	rte_mcfg_tailq_write_lock();
 
-	TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
 		if (te->data == (void *) table) {
 			TAILQ_REMOVE(efd_list, te, next);
 			rte_free(te);
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..d2c9ec42c7 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -21,7 +21,7 @@
 
 struct rte_eth_dev_callback;
 /** @internal Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
+RTE_TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
 struct rte_eth_dev;
 
diff --git a/lib/hash/rte_fbk_hash.h b/lib/hash/rte_fbk_hash.h
index c4d6976d2b..9c3a61c1d6 100644
--- a/lib/hash/rte_fbk_hash.h
+++ b/lib/hash/rte_fbk_hash.h
@@ -17,7 +17,6 @@
 
 #include <stdint.h>
 #include <errno.h>
-#include <sys/queue.h>
 
 #ifdef __cplusplus
 extern "C" {
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index d5a95a6e00..696a1121e2 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2021 Intel Corporation
  */
 
+#include <sys/queue.h>
+
 #include <rte_thash.h>
 #include <rte_tailq.h>
 #include <rte_random.h>
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index 0bfe64b14e..80f931c32a 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -62,7 +62,7 @@ struct ip_frag_key {
  * First two entries in the frags[] array are for the last and first fragments.
  */
 struct ip_frag_pkt {
-	TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
+	RTE_TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
 	struct ip_frag_key key;           /**< fragmentation key */
 	uint64_t             start;       /**< creation timestamp */
 	uint32_t             total_size;  /**< expected reassembled size */
@@ -83,7 +83,7 @@ struct rte_ip_frag_death_row {
 	/**< mbufs to be freed */
 };
 
-TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
+RTE_TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
 
 /** fragmentation table statistics */
 struct ip_frag_tbl_stat {
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 59a588425b..c5f859ae71 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1337,7 +1337,7 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 
 	rte_mcfg_mempool_read_lock();
 
-	TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
+	RTE_TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
 		(*func)((struct rte_mempool *) te->data, arg);
 	}
 
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 4235d6f0bf..f57ecbd6fc 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -38,7 +38,6 @@
 #include <stdint.h>
 #include <errno.h>
 #include <inttypes.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_spinlock.h>
@@ -141,7 +140,7 @@ struct rte_mempool_objsz {
  * double-frees.
  */
 struct rte_mempool_objhdr {
-	STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;          /**< The mempool owning the object. */
 	rte_iova_t iova;                 /**< IO address of the object. */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -152,7 +151,7 @@ struct rte_mempool_objhdr {
 /**
  * A list of object headers type
  */
-STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
+RTE_STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 
@@ -171,7 +170,7 @@ struct rte_mempool_objtlr {
 /**
  * A list of memory where objects are stored
  */
-STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
+RTE_STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
 
 /**
  * Callback used to free a memory chunk
@@ -186,7 +185,7 @@ typedef void (rte_mempool_memchunk_free_cb_t)(struct rte_mempool_memhdr *memhdr,
  * and physically contiguous.
  */
 struct rte_mempool_memhdr {
-	STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;  /**< The mempool owning the chunk */
 	void *addr;              /**< Virtual address of the chunk */
 	rte_iova_t iova;         /**< IO address of the chunk */
diff --git a/lib/pci/rte_pci.h b/lib/pci/rte_pci.h
index 1f33d687f4..71cbd441c7 100644
--- a/lib/pci/rte_pci.h
+++ b/lib/pci/rte_pci.h
@@ -18,7 +18,6 @@ extern "C" {
 
 #include <stdio.h>
 #include <limits.h>
-#include <sys/queue.h>
 #include <inttypes.h>
 #include <sys/types.h>
 
diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 16718ca7f1..43ce1a29d4 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -26,7 +26,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdint.h>
 #include <string.h>
-#include <sys/queue.h>
 #include <errno.h>
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/table/rte_swx_table.h b/lib/table/rte_swx_table.h
index e23f2304c6..f93e5f3f95 100644
--- a/lib/table/rte_swx_table.h
+++ b/lib/table/rte_swx_table.h
@@ -16,7 +16,8 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
+
+#include <rte_os.h>
 
 /** Match type. */
 enum rte_swx_table_match_type {
@@ -68,7 +69,7 @@ struct rte_swx_table_entry {
 	/** Used to facilitate the membership of this table entry to a
 	 * linked list.
 	 */
-	TAILQ_ENTRY(rte_swx_table_entry) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_entry) node;
 
 	/** Key value for the current entry. Array of *key_size* bytes or NULL
 	 * if the *key_size* for the current table is 0.
@@ -111,7 +112,7 @@ struct rte_swx_table_entry {
 };
 
 /** List of table entries. */
-TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
+RTE_TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
 
 /**
  * Table memory footprint get
diff --git a/lib/table/rte_swx_table_selector.h b/lib/table/rte_swx_table_selector.h
index 71b6a74810..62988d2856 100644
--- a/lib/table/rte_swx_table_selector.h
+++ b/lib/table/rte_swx_table_selector.h
@@ -16,7 +16,6 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_compat.h>
 
@@ -56,7 +55,7 @@ struct rte_swx_table_selector_params {
 /** Group member parameters. */
 struct rte_swx_table_selector_member {
 	/** Linked list connectivity. */
-	TAILQ_ENTRY(rte_swx_table_selector_member) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_selector_member) node;
 
 	/** Member ID. */
 	uint32_t member_id;
@@ -66,7 +65,7 @@ struct rte_swx_table_selector_member {
 };
 
 /** List of group members. */
-TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
+RTE_TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
 
 /** Group parameters. */
 struct rte_swx_table_selector_group {
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e0b67721b6..e4a445e709 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -32,7 +32,7 @@ vhost_user_iotlb_pending_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_pending_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -100,7 +100,8 @@ vhost_user_iotlb_pending_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next,
+				temp_node) {
 		if (node->iova < iova)
 			continue;
 		if (node->iova >= iova + size)
@@ -121,7 +122,7 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -141,7 +142,7 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
 
 	entry_idx = rte_rand() % vq->iotlb_cache_nr;
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		if (!entry_idx) {
 			TAILQ_REMOVE(&vq->iotlb_list, node, next);
 			rte_mempool_put(vq->iotlb_pool, node);
@@ -218,7 +219,7 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		/* Sorted list */
 		if (unlikely(iova + size < node->iova))
 			break;
diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/rte_vdpa_dev.h
index bfada387b0..b0f494815f 100644
--- a/lib/vhost/rte_vdpa_dev.h
+++ b/lib/vhost/rte_vdpa_dev.h
@@ -71,7 +71,7 @@ struct rte_vdpa_dev_ops {
  * vdpa device structure includes device address and device operations.
  */
 struct rte_vdpa_device {
-	TAILQ_ENTRY(rte_vdpa_device) next;
+	RTE_TAILQ_ENTRY(rte_vdpa_device) next;
 	/** Generic device information */
 	struct rte_device *device;
 	/** vdpa device operations */
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index 99a926a772..6dd91859ac 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -115,7 +115,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
 	int ret = -1;
 
 	rte_spinlock_lock(&vdpa_device_list_lock);
-	TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
+	RTE_TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
 		if (dev != cur_dev)
 			continue;
 
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [PATCH v2 1/6] bbdev: add capability for CRC16 check
  @ 2021-08-19 21:10  4% ` Nicolas Chautru
  0 siblings, 0 replies; 200+ results
From: Nicolas Chautru @ 2021-08-19 21:10 UTC (permalink / raw)
  To: dev, gakhil
  Cc: thomas, trix, hemant.agrawal, mingshan.zhang, arun.joshi,
	Nicolas Chautru

Add a missing capability flag for the case when CRC16
is being used for TB CRC check.

Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
 app/test-bbdev/test_bbdev_vector.c     |  2 ++
 doc/guides/prog_guide/bbdev.rst        |  3 +++
 doc/guides/rel_notes/release_21_11.rst |  1 +
 lib/bbdev/rte_bbdev_op.h               | 34 ++++++++++++++++++----------------
 4 files changed, 24 insertions(+), 16 deletions(-)

diff --git a/app/test-bbdev/test_bbdev_vector.c b/app/test-bbdev/test_bbdev_vector.c
index 614dbd1..8d796b1 100644
--- a/app/test-bbdev/test_bbdev_vector.c
+++ b/app/test-bbdev/test_bbdev_vector.c
@@ -167,6 +167,8 @@
 		*op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK;
 	else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP"))
 		*op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP;
+	else if (!strcmp(token, "RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK"))
+		*op_flag_value = RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK;
 	else if (!strcmp(token, "RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS"))
 		*op_flag_value = RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS;
 	else if (!strcmp(token, "RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE"))
diff --git a/doc/guides/prog_guide/bbdev.rst b/doc/guides/prog_guide/bbdev.rst
index 9619280..8bd7cba 100644
--- a/doc/guides/prog_guide/bbdev.rst
+++ b/doc/guides/prog_guide/bbdev.rst
@@ -891,6 +891,9 @@ given below.
 |RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP                                    |
 | Set to drop the last CRC bits decoding output                      |
 +--------------------------------------------------------------------+
+|RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK                                    |
+| Set for code block CRC-16 checking                                 |
++--------------------------------------------------------------------+
 |RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS                                 |
 | Set for bit-level de-interleaver bypass on input stream            |
 +--------------------------------------------------------------------+
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a55..69dd518 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -84,6 +84,7 @@ API Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* bbdev: Added capability related to more comprehensive CRC options.
 
 ABI Changes
 -----------
diff --git a/lib/bbdev/rte_bbdev_op.h b/lib/bbdev/rte_bbdev_op.h
index f946842..7c44ddd 100644
--- a/lib/bbdev/rte_bbdev_op.h
+++ b/lib/bbdev/rte_bbdev_op.h
@@ -142,51 +142,53 @@ enum rte_bbdev_op_ldpcdec_flag_bitmasks {
 	RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK = (1ULL << 1),
 	/** Set to drop the last CRC bits decoding output */
 	RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP = (1ULL << 2),
+	/** Set for transport block CRC-16 checking */
+	RTE_BBDEV_LDPC_CRC_TYPE_16_CHECK = (1ULL << 3),
 	/** Set for bit-level de-interleaver bypass on Rx stream. */
-	RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 3),
+	RTE_BBDEV_LDPC_DEINTERLEAVER_BYPASS = (1ULL << 4),
 	/** Set for HARQ combined input stream enable. */
-	RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 4),
+	RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE = (1ULL << 5),
 	/** Set for HARQ combined output stream enable. */
-	RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 5),
+	RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE = (1ULL << 6),
 	/** Set for LDPC decoder bypass.
 	 *  RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE must be set.
 	 */
-	RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 6),
+	RTE_BBDEV_LDPC_DECODE_BYPASS = (1ULL << 7),
 	/** Set for soft-output stream enable */
-	RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 7),
+	RTE_BBDEV_LDPC_SOFT_OUT_ENABLE = (1ULL << 8),
 	/** Set for Rate-Matching bypass on soft-out stream. */
-	RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 8),
+	RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS = (1ULL << 9),
 	/** Set for bit-level de-interleaver bypass on soft-output stream. */
-	RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 9),
+	RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS = (1ULL << 10),
 	/** Set for iteration stopping on successful decode condition
 	 *  i.e. a successful syndrome check.
 	 */
-	RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 10),
+	RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE = (1ULL << 11),
 	/** Set if a device supports decoder dequeue interrupts. */
-	RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 11),
+	RTE_BBDEV_LDPC_DEC_INTERRUPTS = (1ULL << 12),
 	/** Set if a device supports scatter-gather functionality. */
-	RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 12),
+	RTE_BBDEV_LDPC_DEC_SCATTER_GATHER = (1ULL << 13),
 	/** Set if a device supports input/output HARQ compression. */
-	RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 13),
+	RTE_BBDEV_LDPC_HARQ_6BIT_COMPRESSION = (1ULL << 14),
 	/** Set if a device supports input LLR compression. */
-	RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 14),
+	RTE_BBDEV_LDPC_LLR_COMPRESSION = (1ULL << 15),
 	/** Set if a device supports HARQ input from
 	 *  device's internal memory.
 	 */
-	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 15),
+	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE = (1ULL << 16),
 	/** Set if a device supports HARQ output to
 	 *  device's internal memory.
 	 */
-	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 16),
+	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE = (1ULL << 17),
 	/** Set if a device supports loop-back access to
 	 *  HARQ internal memory. Intended for troubleshooting.
 	 */
-	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 17),
+	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK = (1ULL << 18),
 	/** Set if a device includes LLR filler bits in the circular buffer
 	 *  for HARQ memory. If not set, it is assumed the filler bits are not
 	 *  in HARQ memory and handled directly by the LDPC decoder.
 	 */
-	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 18)
+	RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS = (1ULL << 19)
 };
 
 /** Flags for LDPC encoder operation and capability structure */
-- 
1.8.3.1


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH v14 0/9] eal: Add EAL API for threading
  2021-08-03 19:01  3%     ` [dpdk-dev] [PATCH v13 " Narcisa Ana Maria Vasile
@ 2021-08-19 21:31  3%       ` Narcisa Ana Maria Vasile
  0 siblings, 0 replies; 200+ results
From: Narcisa Ana Maria Vasile @ 2021-08-19 21:31 UTC (permalink / raw)
  To: dev, thomas, dmitry.kozliuk, khot, navasile, dmitrym, roretzla,
	talshn, ocardona
  Cc: bruce.richardson, david.marchand, pallavi.kadam

From: Narcisa Vasile <navasile@microsoft.com>

EAL thread API

**Problem Statement**
DPDK currently uses the pthread interface to create and manage threads.
Windows does not support the POSIX thread programming model, so it
currently relies on a header file that hides the Windows calls behind
pthread-compatible interfaces. Given that EAL should isolate the
environment specifics from applications and libraries and mediate all
communication with the operating system, a new EAL interface is needed
for thread management.

**Goals**
* Introduce a generic EAL API for threading support that will remove
  the current Windows pthread.h shim.
* Replace references to pthread_* across the DPDK codebase with the new
  RTE_THREAD_* API.
* Allow users to choose between using the RTE_THREAD_* API or a
  3rd party thread library through a configuration option.

**Design plan**
New API main files:
* rte_thread.h (librte_eal/include)
* rte_thread.c (librte_eal/windows)
* rte_thread.c (librte_eal/common)

**A schematic example of the design**
--------------------------------------------------
lib/librte_eal/include/rte_thread.h
int rte_thread_create();

lib/librte_eal/common/rte_thread.c
int rte_thread_create() 
{
	return pthread_create();
}

lib/librte_eal/windows/rte_thread.c
int rte_thread_create() 
{
	return CreateThread();
}
-----------------------------------------------------

**Thread attributes**

When or after a thread is created, specific characteristics of the thread
can be adjusted. Given that the thread characteristics that are of interest
for DPDK applications are affinity and priority, the following structure
that represents thread attributes has been defined:

typedef struct
{
	enum rte_thread_priority priority;
	rte_cpuset_t cpuset;
} rte_thread_attr_t;

The *rte_thread_create()* function can optionally receive
an rte_thread_attr_t
object that will cause the thread to be created with the
affinity and priority
described by the attributes object. If no rte_thread_attr_t is passed
(parameter is NULL), the default affinity and priority are used.
An rte_thread_attr_t object can also be set to the default values
by calling *rte_thread_attr_init()*.

*Priority* is represented through an enum that currently advertises
two values for priority:
	- RTE_THREAD_PRIORITY_NORMAL
	- RTE_THREAD_PRIORITY_REALTIME_CRITICAL
The enum can be extended to allow for multiple priority levels.
rte_thread_set_priority      - sets the priority of a thread
rte_thread_attr_set_priority - updates an rte_thread_attr_t object
                               with a new value for priority

*Affinity* is described by the already known “rte_cpuset_t” type.
rte_thread_attr_set/get_affinity - sets/gets the affinity field in a
                                   rte_thread_attr_t object
rte_thread_set/get_affinity      – sets/gets the affinity of a thread
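
For illustration, a rough usage sketch (the thread id type, the exact
function signatures and the thread_func/arg/cpuset placeholders are
assumptions based on the description above, not the final API):

	rte_thread_attr_t attr;
	rte_thread_t thread_id;

	/* Start from the default attributes, then adjust them. */
	rte_thread_attr_init(&attr);
	rte_thread_attr_set_priority(&attr, RTE_THREAD_PRIORITY_NORMAL);
	rte_thread_attr_set_affinity(&attr, &cpuset); /* cpuset filled by caller */

	/* The new thread starts with the affinity and priority from attr;
	 * passing NULL instead of &attr would use the defaults.
	 */
	rte_thread_create(&thread_id, &attr, thread_func, arg);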

**Errors**
A translation function that maps Windows error codes to errno-style
error codes is provided. 
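
A minimal sketch of the idea (the mapping below is illustrative only,
not the actual table used by the Windows EAL):

	static int
	thread_translate_win32_error(DWORD error)
	{
		switch (error) {
		case ERROR_SUCCESS:
			return 0;
		case ERROR_ACCESS_DENIED:
			return EACCES;
		case ERROR_NOT_ENOUGH_MEMORY:
		case ERROR_OUTOFMEMORY:
			return ENOMEM;
		default:
			return EINVAL;
		}
	}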

**Future work**
The long term plan is for EAL to provide full threading support:
* Add support for condition variables
* Add support for pthread_mutex_trylock
* Additional functionality offered by pthread_*
  (such as pthread_setname_np, etc.)

v14:
- Remove patch "eal: add EAL argument for setting thread priority"
  This will be added later when enabling the new threading API.
- Remove priority enum value "_UNDEFINED". NORMAL is used
  as the default.
- Fix issue with thread return value.

v13:
 - Fix syntax error in unit tests

v12:
 - Fix freebsd warning about initializer in unit tests

v11:
 - Add unit tests for thread API
 - Rebase

v10:
 - Remove patch no. 10. It will be broken down in subpatches 
   and sent as a different patchset that depends on this one.
   This is done due to the ABI breaks that would be caused by patch 10.
 - Replace unix/rte_thread.c with common/rte_thread.c
 - Remove initializations that may prevent compiler from issuing useful
   warnings.
 - Remove rte_thread_types.h and rte_windows_thread_types.h
 - Remove unneeded priority macros (EAL_THREAD_PRIORITY*)
 - Remove functions that retrieves thread handle from process handle
 - Remove rte_thread_cancel() until same behavior is obtained on
   all platforms.
 - Fix rte_thread_detach() function description,
   return value and remove empty line.
 - Reimplement mutex functions. Add compatible representation for mutex
   identifier. Add macro to replace static mutex initialization instances.
 - Fix commit messages (lines too long, remove unicode symbols)

v9:
- Sign patches

v8:
- Rebase
- Add rte_thread_detach() API
- Set default priority, when user did not specify a value

v7:
Based on DmitryK's review:
- Change thread id representation
- Change mutex id representation
- Implement static mutex initializer for Windows
- Change barrier identifier representation
- Improve commit messages
- Add missing doxygen comments
- Split error translation function
- Improve name for affinity function
- Remove cpuset_size parameter
- Fix eal_create_cpu_map function
- Map EAL priority values to OS specific values
- Add thread wrapper for start routine
- Do not export rte_thread_cancel() on Windows
- Cleanup, fix comments, fix typos.

v6:
- improve error-translation function
- call the error translation function in rte_thread_value_get()

v5:
- update cover letter with more details on the priority argument

v4:
- fix function description
- rebase

v3:
- rebase

v2:
- revert changes that break ABI 
- break up changes into smaller patches
- fix coding style issues
- fix issues with errors
- fix parameter type in examples/kni.c

Narcisa Vasile (9):
  eal: add basic threading functions
  eal: add thread attributes
  eal/windows: translate Windows errors to errno-style errors
  eal: implement functions for thread affinity management
  eal: implement thread priority management functions
  eal: add thread lifetime management
  eal: implement functions for mutex management
  eal: implement functions for thread barrier management
  Add unit tests for thread API

 app/test/meson.build            |   2 +
 app/test/test_threads.c         | 419 +++++++++++++++++++++++
 lib/eal/common/meson.build      |   1 +
 lib/eal/common/rte_thread.c     | 441 ++++++++++++++++++++++++
 lib/eal/include/rte_thread.h    | 404 +++++++++++++++++++++-
 lib/eal/unix/meson.build        |   1 -
 lib/eal/unix/rte_thread.c       |  92 -----
 lib/eal/version.map             |  20 ++
 lib/eal/windows/eal_lcore.c     | 176 +++++++---
 lib/eal/windows/eal_windows.h   |  10 +
 lib/eal/windows/include/sched.h |   2 +-
 lib/eal/windows/rte_thread.c    | 584 ++++++++++++++++++++++++++++++--
 12 files changed, 1979 insertions(+), 173 deletions(-)
 create mode 100644 app/test/test_threads.c
 create mode 100644 lib/eal/common/rte_thread.c
 delete mode 100644 lib/eal/unix/rte_thread.c

-- 
2.31.0.vfs.0.1


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v3] ethdev: fix representor port ID search by name
                     ` (2 preceding siblings ...)
  2021-08-18 14:00  3% ` [dpdk-dev] [PATCH v2] " Andrew Rybchenko
@ 2021-08-20 12:18  3% ` Andrew Rybchenko
  3 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-08-20 12:18 UTC (permalink / raw)
  To: Ajit Khaparde, Somnath Kotur, John Daley, Hyong Youb Kim,
	Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang, Matan Azrad,
	Shahaf Shuler, Viacheslav Ovsiienko, Thomas Monjalon,
	Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov

From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>

Getting a list of representors from a representor does not make sense.
Instead, a parent device should be used.

To this end, extend the rte_eth_dev_data structure to include the port ID
of the parent device for representors.

Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
The new field is added into the hole in rte_eth_dev_data structure.
The patch does not change ABI, but extra care is required since ABI
check is disabled for the structure because of the libabigail bug [1].

Potentially it is bad for out-of-tree drivers which implement
representors but do not fill in the new parent_port_id field in
the rte_eth_dev_data structure. Do we care?

Maybe the patch should add lines to the release notes, but I'd like
to get initial feedback first.

mlx5 changes should be reviewed by maintainers very carefully, since
we are not sure if we patch it correctly.

[1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060

v3:
    - fix mlx5 build breakage

v2:
    - fix mlx5 review notes
    - try device port ID first before parent in order to address
      backward compatibility issue

 drivers/net/bnxt/bnxt_reps.c             |  1 +
 drivers/net/enic/enic_vf_representor.c   |  1 +
 drivers/net/i40e/i40e_vf_representor.c   |  1 +
 drivers/net/ice/ice_dcf_vf_representor.c |  1 +
 drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
 drivers/net/mlx5/linux/mlx5_os.c         | 17 +++++++++++++++++
 drivers/net/mlx5/windows/mlx5_os.c       | 17 +++++++++++++++++
 lib/ethdev/ethdev_driver.h               |  6 +++---
 lib/ethdev/rte_class_eth.c               | 22 ++++++++++++++++++++--
 lib/ethdev/rte_ethdev.c                  |  8 ++++----
 lib/ethdev/rte_ethdev_core.h             |  4 ++++
 11 files changed, 70 insertions(+), 9 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index bdbad53b7d..902591cd39 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	eth_dev->data->representor_id = rep_params->vf_id;
+	eth_dev->data->parent_port_id = rep_params->parent_dev->data->port_id;
 
 	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
 	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr,
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index 79dd6e5640..6ee7967ce9 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	eth_dev->data->representor_id = vf->vf_id;
+	eth_dev->data->parent_port_id = pf->port_id;
 	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
 		sizeof(struct rte_ether_addr) *
 		ENIC_UNICAST_PERFECT_FILTERS, 0);
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b55381..865b637585 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
 					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 	ethdev->data->representor_id = representor->vf_id;
+	ethdev->data->parent_port_id = pf->dev_data->parent_port_id;
 
 	/* Setting the number queues allocated to the VF */
 	ethdev->data->nb_rx_queues = vf->vsi->nb_qps;
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e9..c7cd3fd290 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
 
 	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	vf_rep_eth_dev->data->representor_id = repr->vf_id;
+	vf_rep_eth_dev->data->parent_port_id = repr->dcf_eth_dev->data->port_id;
 
 	vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
 
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a194..7a2063849e 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
 
 	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 	ethdev->data->representor_id = representor->vf_id;
+	ethdev->data->parent_port_id = representor->pf_ethdev->data->port_id;
 
 	/* Set representor device ops */
 	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa48..66d851a97d 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1677,6 +1677,23 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->representor) {
 		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 		eth_dev->data->representor_id = priv->representor_id;
+		MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
+			struct mlx5_priv *opriv =
+				rte_eth_devices[port_id].data->dev_private;
+			if (opriv &&
+			    opriv->master &&
+			    opriv->domain_id == priv->domain_id &&
+			    opriv->sh == priv->sh) {
+				eth_dev->data->parent_port_id =
+					rte_eth_devices[port_id].data->port_id;
+				break;
+			}
+		}
+		if (port_id >= RTE_MAX_ETHPORTS) {
+			DRV_LOG(ERR, "no master device for representor");
+			err = ENODEV;
+			goto error;
+		}
 	}
 	priv->mp_id.port_id = eth_dev->data->port_id;
 	strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN);
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 7e1df1c751..5c72c89b5a 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -543,6 +543,23 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	if (priv->representor) {
 		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
 		eth_dev->data->representor_id = priv->representor_id;
+		MLX5_ETH_FOREACH_DEV(port_id, &priv->pci_dev->device) {
+			struct mlx5_priv *opriv =
+				rte_eth_devices[port_id].data->dev_private;
+			if (opriv &&
+			    opriv->master &&
+			    opriv->domain_id == priv->domain_id &&
+			    opriv->sh == priv->sh) {
+				eth_dev->data->parent_port_id =
+					rte_eth_devices[port_id].data->port_id;
+				break;
+			}
+		}
+		if (port_id >= RTE_MAX_ETHPORTS) {
+			DRV_LOG(ERR, "no master device for representor");
+			err = ENODEV;
+			goto error;
+		}
 	}
 	/*
 	 * Store associated network device interface index. This index
diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h
index 40e474aa7e..b940e6cb38 100644
--- a/lib/ethdev/ethdev_driver.h
+++ b/lib/ethdev/ethdev_driver.h
@@ -1248,8 +1248,8 @@ struct rte_eth_devargs {
  * For backward compatibility, if no representor info, direct
  * map legacy VF (no controller and pf).
  *
- * @param ethdev
- *  Handle of ethdev port.
+ * @param port_id
+ *  Port ID of the backing device.
  * @param type
  *  Representor type.
  * @param controller
@@ -1266,7 +1266,7 @@ struct rte_eth_devargs {
  */
 __rte_internal
 int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
 			   int controller, int pf, int representor_port,
 			   uint16_t *repr_id);
diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c
index 1fe5fa1f36..167d2d798c 100644
--- a/lib/ethdev/rte_class_eth.c
+++ b/lib/ethdev/rte_class_eth.c
@@ -95,14 +95,32 @@ eth_representor_cmp(const char *key __rte_unused,
 		c = i / (np * nf);
 		p = (i / nf) % np;
 		f = i % nf;
-		if (rte_eth_representor_id_get(edev,
+		/*
+		 * rte_eth_representor_id_get expects to receive port ID of
+		 * the master device, but in order to maintain compatibility
+		 * with mlx5's hardware bonding and legacy representor
+		 * specification using just VF numbers, the representor's port
+		 * ID is tried first.
+		 */
+		ret = rte_eth_representor_id_get(edev->data->port_id,
 			eth_da.type,
 			eth_da.nb_mh_controllers == 0 ? -1 :
 					eth_da.mh_controllers[c],
 			eth_da.nb_ports == 0 ? -1 : eth_da.ports[p],
 			eth_da.nb_representor_ports == 0 ? -1 :
 					eth_da.representor_ports[f],
-			&id) < 0)
+			&id);
+		if (ret == -ENOTSUP)
+			ret = rte_eth_representor_id_get(
+				edev->data->parent_port_id,
+				eth_da.type,
+				eth_da.nb_mh_controllers == 0 ? -1 :
+						eth_da.mh_controllers[c],
+				eth_da.nb_ports == 0 ? -1 : eth_da.ports[p],
+				eth_da.nb_representor_ports == 0 ? -1 :
+						eth_da.representor_ports[f],
+				&id);
+		if (ret < 0)
 			continue;
 		if (data->representor_id == id)
 			return 0;
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1..228ef7bf23 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -5997,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)
 }
 
 int
-rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
+rte_eth_representor_id_get(uint16_t port_id,
 			   enum rte_eth_representor_type type,
 			   int controller, int pf, int representor_port,
 			   uint16_t *repr_id)
@@ -6013,7 +6013,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 		return -EINVAL;
 
 	/* Get PMD representor range info. */
-	ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
+	ret = rte_eth_representor_info_get(port_id, NULL);
 	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
 	    controller == -1 && pf == -1) {
 		/* Direct mapping for legacy VF representor. */
@@ -6028,7 +6028,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 	if (info == NULL)
 		return -ENOMEM;
 	info->nb_ranges_alloc = n;
-	ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
+	ret = rte_eth_representor_info_get(port_id, info);
 	if (ret < 0)
 		goto out;
 
@@ -6047,7 +6047,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
 			continue;
 		if (info->ranges[i].id_end < info->ranges[i].id_base) {
 			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
-				ethdev->data->port_id, info->ranges[i].id_base,
+				port_id, info->ranges[i].id_base,
 				info->ranges[i].id_end, i);
 			continue;
 
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..13cb84b52f 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -185,6 +185,10 @@ struct rte_eth_dev_data {
 			/**< Switch-specific identifier.
 			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
 			 */
+	uint16_t parent_port_id;
+			/**< Port ID of the backing device.
+			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
+			 */
 
 	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
 	uint64_t reserved_64s[4]; /**< Reserved for future fields */
-- 
2.30.2


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [RFC 0/7] hide eth dev related structures
@ 2021-08-20 16:28  3% Konstantin Ananyev
  2021-08-20 16:28  2% ` [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure Konstantin Ananyev
  2021-08-26 12:37  3% ` [dpdk-dev] [RFC 0/7] hide eth dev related structures Jerin Jacob
  0 siblings, 2 replies; 200+ results
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
	beilei.xing, techboard, Konstantin Ananyev

NOTE: This is just an RFC to start further discussion and collect feedback.
Due to the significant amount of work required, the changes are applied only
to two PMDs so far: net/i40e and net/ice.
So to build it you'll need to add:
-Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
to your config options. 

The aim of this patch series is to make the rte_ethdev core data structures
(rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback, etc.) internal to DPDK
and not visible to the user.
That should allow future changes to core ethdev-related structures
to be transparent to the user and help to improve ABI/API stability.
Note that the current ethdev API is preserved, though this is certainly an ABI break.

The work is based on previous discussion at:
https://www.mail-archive.com/dev@dpdk.org/msg211405.html
and consists of the following main points:
1. Move public 'fast' function pointers (rx_pkt_burst(), etc.) from
   rte_eth_dev into a separate flat array. We keep it public to still be
   able to use inline functions for these 'fast' calls
   (like rte_eth_rx_burst(), etc.) to avoid/minimize slowdown.
2. Change the prototype of these 'fast' functions within PMDs
   (pkt_rx_burst(), etc.) to accept a pair of <port_id, queue_id>
   instead of a queue pointer.
3. Some mechanical changes in function start/finish code are also required,
   basically to avoid an extra level of indirection - PMDs are required to do
   some preliminary checks and data retrieval that are currently done at user
   level by the inline rte_eth* functions.
4. Special _rte_eth_*_prolog(/epilog) inline functions and helper macros
   are provided to make these changes inside PMDs as straightforward
   as possible.
5. Change implementation of 'fast' ethdev functions (rte_eth_rx_burst(), etc.)
   to use new public flat array. 
6. Move rte_eth_dev, rte_eth_dev_data, rte_eth_rxtx_callback and related things
   into internal header: <ethdev_driver.h>.
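
To illustrate points 1 and 5 above, a rough sketch of what
rte_eth_rx_burst() becomes on top of the public flat array
(field/array names follow patch 1/7; treat it as a sketch, not the
final code):

static inline uint16_t
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	/* Single indirect call through the flat array indexed by port id;
	 * the PMD now receives <port_id, queue_id> instead of a queue pointer.
	 */
	return rte_eth_burst_api[port_id].rx_pkt_burst(port_id, queue_id,
			rx_pkts, nb_pkts);
}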

That approach was selected to avoid(/minimize) possible performance losses.
 
So far I have done only a limited amount of functional and performance testing.
I didn't spot any functional problems, and the performance numbers
remain the same before and after the patch on my box (testpmd, macswap fwd).

Remaining items:
==============
- implement the required changes for all PMDs in drivers/net.
  So far I have done the changes only for two drivers, and would definitely
  use some help from other PMD maintainers. The required changes are
  mechanical, but we have a lot of drivers these days.
- <rte_bus_pci.h> contains a reference to an rte_eth_dev field via the
  RTE_ETH_DEV_TO_PCI(eth_dev) macro.
  This macro needs to move into some internal header.
- Extra testing
- checkpatch warnings
- docs update

Konstantin Ananyev (7):
  eth: move ethdev 'burst' API into separate structure
  eth: make drivers to use new API for Rx
  eth: make drivers to use new API for Tx
  eth: make drivers to use new API for Tx prepare
  eth: make drivers to use new API to obtain descriptor status
  eth: make drivers to use new API for Rx queue count
  eth: hide eth dev related structures

 app/test-pmd/config.c                         |  23 +-
 app/test/virtual_pmd.c                        |  27 +-
 drivers/common/octeontx2/otx2_sec_idev.c      |   2 +-
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |  15 +-
 drivers/net/i40e/i40e_ethdev_vf.c             |  15 +-
 drivers/net/i40e/i40e_rxtx.c                  | 243 ++++---
 drivers/net/i40e/i40e_rxtx.h                  |  68 +-
 drivers/net/i40e/i40e_rxtx_vec_avx2.c         |  11 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |  12 +-
 drivers/net/i40e/i40e_rxtx_vec_sse.c          |   8 +-
 drivers/net/i40e/i40e_vf_representor.c        |  10 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  10 +-
 drivers/net/ice/ice_dcf_vf_representor.c      |  10 +-
 drivers/net/ice/ice_ethdev.c                  |  15 +-
 drivers/net/ice/ice_rxtx.c                    | 236 ++++---
 drivers/net/ice/ice_rxtx.h                    |  73 +--
 drivers/net/ice/ice_rxtx_vec_avx2.c           |  24 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c         |  24 +-
 drivers/net/ice/ice_rxtx_vec_common.h         |   7 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            |  12 +-
 lib/ethdev/ethdev_driver.h                    | 601 ++++++++++++++++++
 lib/ethdev/ethdev_private.c                   |  74 +++
 lib/ethdev/ethdev_private.h                   |   3 +
 lib/ethdev/rte_ethdev.c                       | 176 ++++-
 lib/ethdev/rte_ethdev.h                       | 194 ++----
 lib/ethdev/rte_ethdev_core.h                  | 182 ++----
 lib/ethdev/version.map                        |  16 +
 lib/eventdev/rte_event_eth_rx_adapter.c       |   2 +-
 lib/eventdev/rte_event_eth_tx_adapter.c       |   2 +-
 lib/eventdev/rte_eventdev.c                   |   2 +-
 31 files changed, 1488 insertions(+), 611 deletions(-)

-- 
2.26.3


^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure
  2021-08-20 16:28  3% [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
@ 2021-08-20 16:28  2% ` Konstantin Ananyev
  2021-08-26 12:37  3% ` [dpdk-dev] [RFC 0/7] hide eth dev related structures Jerin Jacob
  1 sibling, 0 replies; 200+ results
From: Konstantin Ananyev @ 2021-08-20 16:28 UTC (permalink / raw)
  To: dev
  Cc: thomas, ferruh.yigit, andrew.rybchenko, qiming.yang, qi.z.zhang,
	beilei.xing, techboard, Konstantin Ananyev

Move public function pointers (rx_pkt_burst(), etc.) from rte_eth_dev
into a separate flat array. We can keep it public to still use inline
functions for 'fast' calls (like rte_eth_rx_burst(), etc.) to
avoid/minimize slowdown.
The intention is to make rte_eth_dev and related structures internal.
That should allow future possible changes to core eth_dev structures
to be transparent to the user and help to avoid ABI/API breakages.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/ethdev_private.c  | 74 ++++++++++++++++++++++++++++++++++++
 lib/ethdev/ethdev_private.h  |  3 ++
 lib/ethdev/rte_ethdev.c      | 12 ++++++
 lib/ethdev/rte_ethdev_core.h | 33 ++++++++++++++++
 4 files changed, 122 insertions(+)

diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c
index 012cf73ca2..1ab64d24cf 100644
--- a/lib/ethdev/ethdev_private.c
+++ b/lib/ethdev/ethdev_private.c
@@ -174,3 +174,77 @@ rte_eth_devargs_parse_representor_ports(char *str, void *data)
 		RTE_LOG(ERR, EAL, "wrong representor format: %s\n", str);
 	return str == NULL ? -1 : 0;
 }
+
+static uint16_t
+dummy_eth_rx_burst(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id,
+		__rte_unused struct rte_mbuf **rx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_LOG(ERR, EAL, "rx_pkt_burst for unconfigured port %u\n", port_id);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_burst(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_LOG(ERR, EAL, "tx_pkt_burst for unconfigured port %u\n", port_id);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static uint16_t
+dummy_eth_tx_prepare(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id,
+		__rte_unused struct rte_mbuf **tx_pkts,
+		__rte_unused uint16_t nb_pkts)
+{
+	RTE_LOG(ERR, EAL, "tx_pkt_prepare for unconfigured port %u\n", port_id);
+	rte_errno = ENOTSUP;
+	return 0;
+}
+
+static int
+dummy_eth_rx_queue_count(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id)
+{
+	RTE_LOG(ERR, EAL, "rx_queue_count for unconfigured port %u\n", port_id);
+	return -ENOTSUP;
+}
+
+static int
+dummy_eth_rx_descriptor_status(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id, __rte_unused uint16_t offset)
+{
+	RTE_LOG(ERR, EAL, "rx_descriptor_status for unconfigured port %u\n",
+		port_id);
+	return -ENOTSUP;
+}
+
+static int
+dummy_eth_tx_descriptor_status(__rte_unused uint16_t port_id,
+		__rte_unused uint16_t queue_id, __rte_unused uint16_t offset)
+{
+	RTE_LOG(ERR, EAL, "tx_descriptor_status for unconfigured port %u\n",
+		port_id);
+	return -ENOTSUP;
+}
+
+void
+rte_eth_dev_burst_api_reset(struct rte_eth_burst_api *rba)
+{
+	static const struct rte_eth_burst_api dummy = {
+		.rx_pkt_burst = dummy_eth_rx_burst,
+		.tx_pkt_burst = dummy_eth_tx_burst,
+		.tx_pkt_prepare = dummy_eth_tx_prepare,
+		.rx_queue_count = dummy_eth_rx_queue_count,
+		.rx_descriptor_status = dummy_eth_rx_descriptor_status,
+		.tx_descriptor_status = dummy_eth_tx_descriptor_status,
+	};
+
+	*rba = dummy;
+}
diff --git a/lib/ethdev/ethdev_private.h b/lib/ethdev/ethdev_private.h
index 9bb0879538..b9b0e6755a 100644
--- a/lib/ethdev/ethdev_private.h
+++ b/lib/ethdev/ethdev_private.h
@@ -30,6 +30,9 @@ eth_find_device(const struct rte_eth_dev *_start, rte_eth_cmp_t cmp,
 /* Parse devargs value for representor parameter. */
 int rte_eth_devargs_parse_representor_ports(char *str, void *data);
 
+/* reset eth 'burst' API to dummy values */
+void rte_eth_dev_burst_api_reset(struct rte_eth_burst_api *rba);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1..949292a617 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -44,6 +44,9 @@
 static const char *MZ_RTE_ETH_DEV_DATA = "rte_eth_dev_data";
 struct rte_eth_dev rte_eth_devices[RTE_MAX_ETHPORTS];
 
+/* public 'fast/burst' API */
+struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
+
 /* spinlock for eth device callbacks */
 static rte_spinlock_t eth_dev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
@@ -1336,6 +1339,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	int diag;
 	int ret;
 	uint16_t old_mtu;
+	struct rte_eth_burst_api rba;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
@@ -1363,6 +1367,9 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 */
 	dev->data->dev_configured = 0;
 
+	rba = rte_eth_burst_api[port_id];
+	rte_eth_dev_burst_api_reset(&rte_eth_burst_api[port_id]);
+
 	 /* Store original config, as rollback required on failure */
 	memcpy(&orig_conf, &dev->data->dev_conf, sizeof(dev->data->dev_conf));
 
@@ -1623,6 +1630,8 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	if (old_mtu != dev->data->mtu)
 		dev->data->mtu = old_mtu;
 
+	rte_eth_burst_api[port_id] = rba;
+
 	rte_ethdev_trace_configure(port_id, nb_rx_q, nb_tx_q, dev_conf, ret);
 	return ret;
 }
@@ -1871,6 +1880,7 @@ rte_eth_dev_close(uint16_t port_id)
 	dev = &rte_eth_devices[port_id];
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);
+	rte_eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
 	*lasterr = (*dev->dev_ops->dev_close)(dev);
 	if (*lasterr != 0)
 		lasterr = &binerr;
@@ -1892,6 +1902,8 @@ rte_eth_dev_reset(uint16_t port_id)
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_reset, -ENOTSUP);
 
+	rte_eth_dev_burst_api_reset(rte_eth_burst_api + port_id);
+
 	ret = rte_eth_dev_stop(port_id);
 	if (ret != 0) {
 		RTE_ETHDEV_LOG(ERR,
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..fb8526cb9f 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -25,21 +25,31 @@ TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
 struct rte_eth_dev;
 
+typedef uint16_t (*rte_eth_rx_burst_t)(uint16_t port_id, uint16_t queue_id,
+			struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
 typedef uint16_t (*eth_rx_burst_t)(void *rxq,
 				   struct rte_mbuf **rx_pkts,
 				   uint16_t nb_pkts);
 /**< @internal Retrieve input packets from a receive queue of an Ethernet device. */
 
+typedef uint16_t (*rte_eth_tx_burst_t)(uint16_t port_id, uint16_t queue_id,
+			struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
 typedef uint16_t (*eth_tx_burst_t)(void *txq,
 				   struct rte_mbuf **tx_pkts,
 				   uint16_t nb_pkts);
 /**< @internal Send output packets on a transmit queue of an Ethernet device. */
 
+typedef uint16_t (*rte_eth_tx_prep_t)(uint16_t port_id, uint16_t queue_id,
+			struct rte_mbuf **tx_pkts, uint16_t nb_pkts);
+
 typedef uint16_t (*eth_tx_prep_t)(void *txq,
 				   struct rte_mbuf **tx_pkts,
 				   uint16_t nb_pkts);
 /**< @internal Prepare output packets on a transmit queue of an Ethernet device. */
 
+typedef int (*rte_eth_rx_queue_count_t)(uint16_t port_id, uint16_t queue_id);
 
 typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
 					 uint16_t rx_queue_id);
@@ -48,12 +58,35 @@ typedef uint32_t (*eth_rx_queue_count_t)(struct rte_eth_dev *dev,
 typedef int (*eth_rx_descriptor_done_t)(void *rxq, uint16_t offset);
 /**< @internal Check DD bit of specific RX descriptor */
 
+typedef int (*rte_eth_rx_descriptor_status_t)(uint16_t port_id,
+			uint16_t queue_id, uint16_t offset);
+
 typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset);
 /**< @internal Check the status of a Rx descriptor */
 
+typedef int (*rte_eth_tx_descriptor_status_t)(uint16_t port_id,
+			uint16_t queue_id, uint16_t offset);
+
 typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset);
 /**< @internal Check the status of a Tx descriptor */
 
+struct rte_eth_burst_api {
+	rte_eth_rx_burst_t rx_pkt_burst;
+	/**< PMD receive function. */
+	rte_eth_tx_burst_t tx_pkt_burst;
+	/**< PMD transmit function. */
+	rte_eth_tx_prep_t tx_pkt_prepare;
+	/**< PMD transmit prepare function. */
+	rte_eth_rx_queue_count_t rx_queue_count;
+	/**< Get the number of used RX descriptors. */
+	rte_eth_rx_descriptor_status_t rx_descriptor_status;
+	/**< Check the status of a Rx descriptor. */
+	rte_eth_tx_descriptor_status_t tx_descriptor_status;
+	/**< Check the status of a Tx descriptor. */
+	uintptr_t reserved[2];
+} __rte_cache_min_aligned;
+
+extern struct rte_eth_burst_api rte_eth_burst_api[RTE_MAX_ETHPORTS];
 
 /**
  * @internal
-- 
2.26.3


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH 21.11 v2 0/3] octeontx build only on 64-bit Linux
@ 2021-08-21 14:07  0% Pavan Nikhilesh Bhagavatula
  0 siblings, 0 replies; 200+ results
From: Pavan Nikhilesh Bhagavatula @ 2021-08-21 14:07 UTC (permalink / raw)
  To: David Marchand; +Cc: dev, Thomas Monjalon, Jerin Jacob Kollanukkaran

>On Thu, Mar 25, 2021 at 3:52 PM Thomas Monjalon
><thomas@monjalon.net> wrote:
>>
>> This is a reorg of the patches from Pavan.
>> It has been discussed that it should wait for DPDK 21.11
>> for ABI compatibility reason.
>>
>> Pavan Nikhilesh (3):
>>   net/thunderx: enable build only on 64-bit Linux
>>   common/octeontx: enable build only on 64-bit Linux
>>   common/octeontx2: enable build only on 64-bit Linux
>>
>>  drivers/common/octeontx/meson.build   |  6 ++++++
>>  drivers/common/octeontx2/meson.build  |  4 ++--
>>  drivers/compress/octeontx/meson.build |  6 ++++++
>>  drivers/crypto/octeontx/meson.build   |  7 +++++--
>>  drivers/event/octeontx/meson.build    |  6 ++++++
>>  drivers/event/octeontx2/meson.build   |  4 ++--
>>  drivers/mempool/octeontx/meson.build  |  5 +++--
>>  drivers/mempool/octeontx2/meson.build |  9 ++-------
>>  drivers/net/octeontx/meson.build      |  4 ++--
>>  drivers/net/octeontx2/meson.build     | 10 ++--------
>>  drivers/net/thunderx/meson.build      |  4 ++--
>>  drivers/raw/octeontx2_dma/meson.build | 10 ++++++----
>>  12 files changed, 44 insertions(+), 31 deletions(-)
>
>There were a couple of cleanups (indent etc..) and changes in meson
>files.
>This series does not apply cleanly on the main branch.
>Could you rebase it?

Ack, I will rebase it.

>
>I noticed that the net/cnxk driver does not have this check, but it is
>disabled anyway since it depends on common/cnxk.
>Is it worth adding the check for consistency?

Sure I will add checks to cnxk driver family.

Thanks,
Pavan.

>
>
>--
>David Marchand


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: abstract the behaviour of rte_ctrl_thread_create
  @ 2021-08-23  9:40  3%     ` Olivier Matz
  2021-08-23 21:18  0%       ` Honnappa Nagarahalli
  0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2021-08-23  9:40 UTC (permalink / raw)
  To: Honnappa Nagarahalli
  Cc: thomas, dev, lucp.at.work, david.marchand, Ruifeng Wang, nd

Hi Honnappa,

Back from holidays, sorry for the late answer.

On Mon, Aug 09, 2021 at 01:18:42PM +0000, Honnappa Nagarahalli wrote:
> <snip>
> > 
> > 30/07/2021 23:44, Honnappa Nagarahalli:
> > > The current expected behaviour of the function rte_ctrl_thread_create
> > > is rigid which makes the implementation of the function complex.
> > > Make the expected behaviour abstract to allow for simplified
> > > implementation.
> > >
> > > With this change, the calls to pthread_setaffinity_np can be moved to
> > > the control thread. This will avoid the use of pthread_barrier_wait
> > > and simplify the synchronization mechanism between
> > > rte_ctrl_thread_create and the calling thread.
> > >
> > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > ---
> > > +* eal: The expected behaviour of the function
> > > +``rte_ctrl_thread_create``
> > > +  abstracted to allow for simplified implementation. The new
> > > +behaviour is
> > > +  as follows:
> > > +  Creates a control thread with the given name. The affinity of the
> > > +new
> > > +  thread is based on the CPU affinity retrieved at the time
> > > +rte_eal_init()
> > > +  was called, the dataplane and service lcores are then excluded.
> > 
> > I don't understand what is different of the current API:
> >  * Wrapper to pthread_create(), pthread_setname_np() and
> >  * pthread_setaffinity_np(). The affinity of the new thread is based
> >  * on the CPU affinity retrieved at the time rte_eal_init() was called,
> >  * the dataplane and service lcores are then excluded.
> My concern is for the word "Wrapper". I am not sure how much we are bound by that to keep the code as a "wrapper".
> The new patch does not change the high level behavior.

I am ok to remove the word "wrapper" from the description, and I agree
it can be better described without quoting the pthread_* functions.

> Are you saying you are ok with the patch without the deprecation notice?

I don't think it requires a deprecation notice if the API and ABI are
left unchanged. To be honest, I find it a bit hard to understand what is
really changed by reading the deprecation notice:

> +* eal: The expected behaviour of the function ``rte_ctrl_thread_create``
> +  abstracted to allow for simplified implementation. The new behaviour is
> +  as follows:
> +  Creates a control thread with the given name. The affinity of the new
> +  thread is based on the CPU affinity retrieved at the time rte_eal_init()
> +  was called, the dataplane and service lcores are then excluded.

I'll send my comments to your patch:
http://patches.dpdk.org/project/dpdk/patch/20210802051652.3611-1-honnappa.nagarahalli@arm.com/


Thanks,
Olivier

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v8] eal: remove sys/queue.h from public headers.
  2021-08-18 23:26  1%         ` [dpdk-dev] [PATCH v7] " William Tu
@ 2021-08-23 13:03  1%           ` William Tu
  2021-08-24 16:21  1%             ` [dpdk-dev] [PATCH v9] " William Tu
  0 siblings, 1 reply; 200+ results
From: William Tu @ 2021-08-23 13:03 UTC (permalink / raw)
  To: dev; +Cc: Dmitry.Kozliuk, Nick Connolly

Currently there are some public headers that include 'sys/queue.h', which
is not POSIX, but usually provided by the Linux/BSD system library.
(Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
The file is missing on Windows. During the Windows build, DPDK uses a
bundled copy, so building a DPDK library works fine.  But when OVS or other
applications use DPDK as a library on Windows, the build fails because some
DPDK public headers include 'sys/queue.h' and no such file exists there.

One solution is to install the 'lib/eal/windows/include/sys/queue.h' into
Windows environment, such as [1]. However, this means DPDK exports the
functionalities of 'sys/queue.h' into the environment, which might cause
symbols, macros, headers clashing with other applications.

The patch fixes it by removing the "#include <sys/queue.h>" from
DPDK public headers, so programs including DPDK headers don't depend
on the system to provide 'sys/queue.h'. When these public headers use
macros such as TAILQ_xxx, we replace them with RTE_-prefixed equivalents.
For Windows, we copy the definitions from <sys/queue.h> to rte_os.h
in Windows EAL. Note that these RTE_ macros are compatible with
<sys/queue.h>, both at the level of API (to use with <sys/queue.h>
macros in C files) and ABI (to avoid breaking it).

Additionally, TAILQ_FOREACH_SAFE is not part of <sys/queue.h>,
so the patch replaces it with RTE_TAILQ_FOREACH_SAFE.
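
For example (illustrative code, not part of the patch), a public header
can declare list linkage with the RTE_ macros while a .c file keeps
using the plain <sys/queue.h> macros on the same objects:

	/* In a public header: no <sys/queue.h> needed. */
	struct my_elem {
		RTE_TAILQ_ENTRY(my_elem) next;
	};
	RTE_TAILQ_HEAD(my_list, my_elem);

	/* In a .c file that includes <sys/queue.h>. */
	static struct my_list head = TAILQ_HEAD_INITIALIZER(head);

	static void
	add_elem(struct my_elem *e)
	{
		TAILQ_INSERT_HEAD(&head, e, next);
	}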

[1] http://mails.dpdk.org/archives/dev/2021-August/216304.html

Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
Signed-off-by: William Tu <u9012063@gmail.com>
---
v7-v8:
* remove duplicate RTE_TAILQ_FOREACH_SAFE at rte_os.h
  put the macro at rte_tailq.h
* remove inline comments
* diff
  https://github.com/williamtu/dpdk/compare/a4144ff11b..6cb7cd8daf
v6-v7:
* remove some redundant "#incldue <sys/queue.h>"
* remove extra newline, add comment at rte_os.h for windows
  use of bundled sys/queue

v5-v6:
* fix tab/indent issue, fix type and spelling
* fix duplicate RTE_TAILQ_FOREACH_SAFE
* fix build error due to drivers/net/mlx5/mlx5_flow_meter.c
---

 drivers/bus/auxiliary/private.h            |  1 +
 drivers/bus/auxiliary/rte_bus_auxiliary.h  |  5 ++--
 drivers/bus/dpaa/dpaa_bus.c                |  4 ++--
 drivers/bus/fslmc/fslmc_bus.c              |  4 ++--
 drivers/bus/fslmc/fslmc_vfio.c             |  9 +++++---
 drivers/bus/ifpga/rte_bus_ifpga.h          |  8 +++----
 drivers/bus/pci/pci_params.c               |  2 ++
 drivers/bus/pci/rte_bus_pci.h              | 13 +++++------
 drivers/bus/pci/windows/pci.c              |  3 +++
 drivers/bus/pci/windows/pci_netuio.c       |  2 ++
 drivers/bus/vdev/rte_bus_vdev.h            |  7 +++---
 drivers/bus/vdev/vdev.c                    |  3 ++-
 drivers/bus/vmbus/rte_bus_vmbus.h          | 13 +++++------
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c         |  2 +-
 drivers/net/bonding/rte_eth_bond_flow.c    |  2 +-
 drivers/net/failsafe/failsafe_flow.c       |  2 +-
 drivers/net/i40e/i40e_ethdev.c             |  9 ++++----
 drivers/net/i40e/i40e_ethdev.h             |  1 +
 drivers/net/i40e/i40e_flow.c               |  6 ++---
 drivers/net/i40e/i40e_hash.c               |  2 +-
 drivers/net/i40e/rte_pmd_i40e.c            |  6 ++---
 drivers/net/iavf/iavf_generic_flow.c       | 14 +++++------
 drivers/net/ice/ice_dcf_ethdev.c           |  1 +
 drivers/net/ice/ice_ethdev.c               |  4 ++--
 drivers/net/ice/ice_generic_flow.c         | 14 +++++------
 drivers/net/ipn3ke/ipn3ke_flow.c           |  2 +-
 drivers/net/mlx5/mlx5_flow_dv.c            |  2 +-
 drivers/net/mlx5/mlx5_flow_meter.c         |  2 +-
 drivers/net/softnic/rte_eth_softnic_flow.c |  3 ++-
 drivers/net/softnic/rte_eth_softnic_swq.c  |  2 +-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c        |  2 +-
 lib/bbdev/rte_bbdev.h                      |  2 +-
 lib/cryptodev/rte_cryptodev.h              |  2 +-
 lib/cryptodev/rte_cryptodev_pmd.h          |  2 +-
 lib/eal/common/eal_common_devargs.c        |  4 ++--
 lib/eal/common/eal_common_log.c            |  1 +
 lib/eal/common/eal_common_options.c        |  2 +-
 lib/eal/common/eal_private.h               |  1 +
 lib/eal/freebsd/include/rte_os.h           | 10 ++++++++
 lib/eal/include/rte_bus.h                  |  5 ++--
 lib/eal/include/rte_class.h                |  6 ++---
 lib/eal/include/rte_dev.h                  |  5 ++--
 lib/eal/include/rte_devargs.h              |  3 +--
 lib/eal/include/rte_log.h                  |  1 -
 lib/eal/include/rte_service.h              |  1 -
 lib/eal/include/rte_tailq.h                | 15 ++++++------
 lib/eal/linux/include/rte_os.h             | 10 ++++++++
 lib/eal/windows/eal_alarm.c                |  1 +
 lib/eal/windows/include/rte_os.h           | 27 ++++++++++++++++++++++
 lib/efd/rte_efd.c                          |  2 +-
 lib/ethdev/rte_ethdev_core.h               |  2 +-
 lib/hash/rte_fbk_hash.h                    |  1 -
 lib/hash/rte_thash.c                       |  2 ++
 lib/ip_frag/rte_ip_frag.h                  |  4 ++--
 lib/mempool/rte_mempool.c                  |  2 +-
 lib/mempool/rte_mempool.h                  |  9 ++++----
 lib/pci/rte_pci.h                          |  1 -
 lib/ring/rte_ring_core.h                   |  1 -
 lib/table/rte_swx_table.h                  |  7 +++---
 lib/table/rte_swx_table_selector.h         |  5 ++--
 lib/vhost/iotlb.c                          | 11 +++++----
 lib/vhost/rte_vdpa_dev.h                   |  2 +-
 lib/vhost/vdpa.c                           |  2 +-
 63 files changed, 176 insertions(+), 123 deletions(-)

diff --git a/drivers/bus/auxiliary/private.h b/drivers/bus/auxiliary/private.h
index 9987e8b501..d22e83cf7a 100644
--- a/drivers/bus/auxiliary/private.h
+++ b/drivers/bus/auxiliary/private.h
@@ -7,6 +7,7 @@
 
 #include <stdbool.h>
 #include <stdio.h>
+#include <sys/queue.h>
 
 #include "rte_bus_auxiliary.h"
 
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index 2462bad2ba..b1f5610404 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -113,7 +112,7 @@ typedef int (rte_auxiliary_dma_unmap_t)(struct rte_auxiliary_device *dev,
  * A structure describing an auxiliary device.
  */
 struct rte_auxiliary_device {
-	TAILQ_ENTRY(rte_auxiliary_device) next;   /**< Next probed device. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
 	struct rte_device device;                 /**< Inherit core device */
 	char name[RTE_DEV_NAME_MAX_LEN + 1];      /**< ASCII device name */
 	struct rte_intr_handle intr_handle;       /**< Interrupt handle */
@@ -124,7 +123,7 @@ struct rte_auxiliary_device {
  * A structure describing an auxiliary driver.
  */
 struct rte_auxiliary_driver {
-	TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
 	struct rte_driver driver;             /**< Inherit core driver. */
 	struct rte_auxiliary_bus *bus;        /**< Auxiliary bus reference. */
 	rte_auxiliary_match_t *match;         /**< Device match function. */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index e499305d85..6cab2ae760 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -105,7 +105,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		comp = compare_dpaa_devices(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
@@ -245,7 +245,7 @@ dpaa_clean_device_list(void)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index becc455f6b..8c8f8a298d 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -45,7 +45,7 @@ cleanup_fslmc_device_list(void)
 	struct rte_dpaa2_device *dev;
 	struct rte_dpaa2_device *t_dev;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
 		TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
@@ -82,7 +82,7 @@ insert_in_device_list(struct rte_dpaa2_device *newdev)
 	struct rte_dpaa2_device *dev = NULL;
 	struct rte_dpaa2_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
 		comp = compare_dpaa2_devname(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c8373e627a..852fcfc4dd 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -808,7 +808,8 @@ fslmc_vfio_process_group(void)
 	bool is_dpmcp_in_blocklist = false, is_dpio_in_blocklist = false;
 	int dpmcp_count = 0, dpio_count = 0, current_device;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			dpmcp_count++;
 			if (dev->device.devargs &&
@@ -825,7 +826,8 @@ fslmc_vfio_process_group(void)
 
 	/* Search the MCP as that should be initialized first. */
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			current_device++;
 			if (dev->device.devargs &&
@@ -872,7 +874,8 @@ fslmc_vfio_process_group(void)
 	}
 
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_IO)
 			current_device++;
 		if (dev->device.devargs &&
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index b43084155a..a85e90d384 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -28,9 +28,9 @@ struct rte_afu_device;
 struct rte_afu_driver;
 
 /** Double linked list of Intel FPGA AFU device. */
-TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
+RTE_TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
 /** Double linked list of Intel FPGA AFU device drivers. */
-TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
+RTE_TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
 
 #define IFPGA_BUS_BITSTREAM_PATH_MAX_LEN 256
 
@@ -71,7 +71,7 @@ struct rte_afu_shared {
  * A structure describing a AFU device.
  */
 struct rte_afu_device {
-	TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
+	RTE_TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
 	struct rte_device device;               /**< Inherit core device */
 	struct rte_rawdev *rawdev;    /**< Point Rawdev */
 	struct rte_afu_id id;                   /**< AFU id within FPGA. */
@@ -105,7 +105,7 @@ typedef int (afu_remove_t)(struct rte_afu_device *);
  * A structure describing a AFU device.
  */
 struct rte_afu_driver {
-	TAILQ_ENTRY(rte_afu_driver) next;       /**< Next afu driver. */
+	RTE_TAILQ_ENTRY(rte_afu_driver) next;   /**< Next afu driver. */
 	struct rte_driver driver;               /**< Inherit core driver. */
 	afu_probe_t *probe;                     /**< Device Probe function. */
 	afu_remove_t *remove;                   /**< Device Remove function. */
diff --git a/drivers/bus/pci/pci_params.c b/drivers/bus/pci/pci_params.c
index 3192e9c967..717388753d 100644
--- a/drivers/bus/pci/pci_params.c
+++ b/drivers/bus/pci/pci_params.c
@@ -2,6 +2,8 @@
  * Copyright 2018 Gaëtan Rivet
  */
 
+#include <sys/queue.h>
+
 #include <rte_bus.h>
 #include <rte_bus_pci.h>
 #include <rte_dev.h>
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 583470e831..673a2850c1 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -37,16 +36,16 @@ struct rte_pci_device;
 struct rte_pci_driver;
 
 /** List of PCI devices */
-TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
+RTE_TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
 /** List of PCI drivers */
-TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
+RTE_TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
 
 /* PCI Bus iterators */
 #define FOREACH_DEVICE_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
 
 struct rte_devargs;
 
@@ -64,7 +63,7 @@ enum rte_pci_kernel_driver {
  * A structure describing a PCI device.
  */
 struct rte_pci_device {
-	TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
+	RTE_TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
 	struct rte_device device;           /**< Inherit core device */
 	struct rte_pci_addr addr;           /**< PCI location. */
 	struct rte_pci_id id;               /**< PCI ID. */
@@ -160,7 +159,7 @@ typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
  * A structure describing a PCI driver.
  */
 struct rte_pci_driver {
-	TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
 	struct rte_driver driver;          /**< Inherit core driver. */
 	struct rte_pci_bus *bus;           /**< PCI bus reference. */
 	rte_pci_probe_t *probe;            /**< Device probe function. */
diff --git a/drivers/bus/pci/windows/pci.c b/drivers/bus/pci/windows/pci.c
index d39a7748b8..d7bd5d6e80 100644
--- a/drivers/bus/pci/windows/pci.c
+++ b/drivers/bus/pci/windows/pci.c
@@ -1,6 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright 2020 Mellanox Technologies, Ltd
  */
+
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/pci/windows/pci_netuio.c b/drivers/bus/pci/windows/pci_netuio.c
index 1bf9133f71..a0b175a8fc 100644
--- a/drivers/bus/pci/windows/pci_netuio.c
+++ b/drivers/bus/pci/windows/pci_netuio.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2020 Intel Corporation.
  */
 
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
index fc315d10fa..2856799953 100644
--- a/drivers/bus/vdev/rte_bus_vdev.h
+++ b/drivers/bus/vdev/rte_bus_vdev.h
@@ -15,12 +15,11 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <rte_dev.h>
 #include <rte_devargs.h>
 
 struct rte_vdev_device {
-	TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
+	RTE_TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
 	struct rte_device device;               /**< Inherit core device */
 };
 
@@ -53,7 +52,7 @@ rte_vdev_device_args(const struct rte_vdev_device *dev)
 }
 
 /** Double linked list of virtual device drivers. */
-TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
+RTE_TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
 
 /**
  * Probe function called for each virtual device driver once.
@@ -107,7 +106,7 @@ typedef int (rte_vdev_dma_unmap_t)(struct rte_vdev_device *dev, void *addr,
  * A virtual device driver abstraction.
  */
 struct rte_vdev_driver {
-	TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
 	struct rte_driver driver;        /**< Inherited general driver. */
 	rte_vdev_probe_t *probe;         /**< Virtual device probe function. */
 	rte_vdev_remove_t *remove;       /**< Virtual device remove function. */
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index 281a2c34e8..a8d8b2327e 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -100,7 +100,8 @@ rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 	struct vdev_custom_scan *custom_scan, *tmp_scan;
 
 	rte_spinlock_lock(&vdev_custom_scan_lock);
-	TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next, tmp_scan) {
+	RTE_TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next,
+				tmp_scan) {
 		if (custom_scan->callback != callback ||
 				(custom_scan->user_arg != (void *)-1 &&
 				custom_scan->user_arg != user_arg))
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 4cf73ce815..6bcff66468 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -20,7 +20,6 @@ extern "C" {
 #include <limits.h>
 #include <stdbool.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -38,15 +37,15 @@ struct rte_vmbus_bus;
 struct vmbus_channel;
 struct vmbus_mon_page;
 
-TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
-TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
+RTE_TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
+RTE_TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
 
 /* VMBus iterators */
 #define FOREACH_DEVICE_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
 
 /** Maximum number of VMBUS resources. */
 enum hv_uio_map {
@@ -62,7 +61,7 @@ enum hv_uio_map {
  * A structure describing a VMBUS device.
  */
 struct rte_vmbus_device {
-	TAILQ_ENTRY(rte_vmbus_device) next;    /**< Next probed VMBUS device */
+	RTE_TAILQ_ENTRY(rte_vmbus_device) next; /**< Next probed VMBUS device */
 	const struct rte_vmbus_driver *driver; /**< Associated driver */
 	struct rte_device device;              /**< Inherit core device */
 	rte_uuid_t device_id;		       /**< VMBUS device id */
@@ -93,7 +92,7 @@ typedef int (vmbus_remove_t)(struct rte_vmbus_device *);
  * A structure describing a VMBUS driver.
  */
 struct rte_vmbus_driver {
-	TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
 	struct rte_driver driver;
 	struct rte_vmbus_bus *bus;          /**< VM bus reference. */
 	vmbus_probe_t *probe;               /**< Device Probe function. */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index dbf85e4eda..ac86b70caf 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -2018,7 +2018,7 @@ bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx)
 	struct ulp_context_list_entry	*entry, *temp;
 
 	rte_spinlock_lock(&bnxt_ulp_ctxt_lock);
-	TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
 		if (entry->ulp_ctx == ulp_ctx) {
 			TAILQ_REMOVE(&ulp_cntx_list, entry, next);
 			rte_free(entry);
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 417f76bf60..65b77faae7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -157,7 +157,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 	/* Destroy all bond flows from its slaves instead of flushing them to
 	 * keep the LACP flow or any other external flows.
 	 */
-	TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
 		lret = bond_flow_destroy(dev, flow, err);
 		if (unlikely(lret != 0))
 			ret = lret;
diff --git a/drivers/net/failsafe/failsafe_flow.c b/drivers/net/failsafe/failsafe_flow.c
index 5e2b5f7c67..354f9fec20 100644
--- a/drivers/net/failsafe/failsafe_flow.c
+++ b/drivers/net/failsafe/failsafe_flow.c
@@ -180,7 +180,7 @@ fs_flow_flush(struct rte_eth_dev *dev,
 			return ret;
 		}
 	}
-	TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
 		TAILQ_REMOVE(&PRIV(dev)->flow_list, flow, next);
 		fs_flow_release(&flow);
 	}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..6590363556 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -5436,7 +5436,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* VSI has child to attach, release child first */
 	if (vsi->veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5444,7 +5444,8 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 	}
 
 	if (vsi->floating_veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head,
+			list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5452,7 +5453,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* Remove all macvlan filters of the VSI */
 	i40e_vsi_remove_all_macvlan_filter(vsi);
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		rte_free(f);
 
 	if (vsi->type != I40E_VSI_MAIN &&
@@ -6055,7 +6056,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
 	i = 0;
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		mac_filter[i] = f->mac_info;
 		ret = i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr);
 		if (ret) {
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60..374b73e4a7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -6,6 +6,7 @@
 #define _I40E_ETHDEV_H_
 
 #include <stdint.h>
+#include <sys/queue.h>
 
 #include <rte_time.h>
 #include <rte_kvargs.h>
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c..e41a84f1d7 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4917,7 +4917,7 @@ i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
 		}
 
 		/* Delete FDIR flows in flow list. */
-		TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 			if (flow->filter_type == RTE_ETH_FILTER_FDIR) {
 				TAILQ_REMOVE(&pf->flow_list, flow, node);
 			}
@@ -4972,7 +4972,7 @@ i40e_flow_flush_ethertype_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete ethertype flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_ETHERTYPE) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
@@ -5000,7 +5000,7 @@ i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete tunnel flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_TUNNEL) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfc..6579b1a00b 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -1366,7 +1366,7 @@ i40e_hash_filter_flush(struct i40e_pf *pf)
 {
 	struct rte_flow *flow, *next;
 
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
 		if (flow->filter_type != RTE_ETH_FILTER_HASH)
 			continue;
 
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index 2e34140c5b..ec24046440 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -216,7 +216,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* remove all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		vlan_num = vsi->vlan_num;
 		filter_type = f->mac_info.filter_type;
 		if (filter_type == I40E_MACVLAN_PERFECT_MATCH ||
@@ -274,7 +274,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* restore all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		if (f->mac_info.filter_type == I40E_MACVLAN_PERFECT_MATCH ||
 		    f->mac_info.filter_type == I40E_MACVLAN_HASH_MATCH) {
 			/**
@@ -563,7 +563,7 @@ rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id,
 	rte_ether_addr_copy(mac_addr, &vf->mac_addr);
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		if (i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr)
 				!= I40E_SUCCESS)
 			PMD_DRV_LOG(WARNING, "Delete MAC failed");
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 1fe270fb22..b86d99e57d 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -1637,7 +1637,7 @@ iavf_flow_init(struct iavf_adapter *ad)
 	TAILQ_INIT(&vf->dist_parser_list);
 	rte_spinlock_init(&vf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 				     engine->type);
@@ -1663,7 +1663,7 @@ iavf_flow_uninit(struct iavf_adapter *ad)
 	struct iavf_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1733,7 +1733,7 @@ iavf_unregister_parser(struct iavf_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -1917,7 +1917,7 @@ iavf_parse_engine_create(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -1946,7 +1946,7 @@ iavf_parse_engine_validate(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2089,7 +2089,7 @@ iavf_flow_is_valid(struct rte_flow *flow)
 	void *temp;
 
 	if (flow && flow->engine) {
-		TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 			if (engine == flow->engine)
 				return true;
 		}
@@ -2142,7 +2142,7 @@ iavf_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
 		ret = iavf_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da87..629e88980d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -4,6 +4,7 @@
 
 #include <errno.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 #include <sys/types.h>
 #include <unistd.h>
 
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954..fadd5f2e5a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1104,7 +1104,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (!vsi || !vsi->mac_num)
 		return -EINVAL;
 
-	TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
 		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
@@ -1115,7 +1115,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (vsi->vlan_num == 0)
 		return 0;
 
-	TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
 		ret = ice_remove_vlan_filter(vsi, &v_f->vlan_info.vlan);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 66b5743abf..3e557efe0c 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1820,7 +1820,7 @@ ice_flow_init(struct ice_adapter *ad)
 	TAILQ_INIT(&pf->dist_parser_list);
 	rte_spinlock_init(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 					engine->type);
@@ -1846,7 +1846,7 @@ ice_flow_uninit(struct ice_adapter *ad)
 	struct ice_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1946,7 +1946,7 @@ ice_unregister_parser(struct ice_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -2272,7 +2272,7 @@ ice_parse_engine_create(struct ice_adapter *ad,
 	void *meta = NULL;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		int ret;
 
 		if (parser_node->parser->parse_pattern_action(ad,
@@ -2305,7 +2305,7 @@ ice_parse_engine_validate(struct ice_adapter *ad,
 	struct ice_flow_parser_node *parser_node;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2477,7 +2477,7 @@ ice_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		ret = ice_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
@@ -2541,7 +2541,7 @@ ice_flow_redirect(struct ice_adapter *ad,
 
 	rte_spinlock_lock(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		if (!p_flow->engine->redirect)
 			continue;
 		ret = p_flow->engine->redirect(ad, p_flow, rd);
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index c702e19ea5..f5867ca055 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -1231,7 +1231,7 @@ ipn3ke_flow_flush(struct rte_eth_dev *dev,
 	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
 	struct rte_flow *flow, *temp;
 
-	TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
 		TAILQ_REMOVE(&hw->flow_list, flow, next);
 		rte_free(flow);
 	}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 31d857030f..ba2bf4de37 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -15099,7 +15099,7 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev,
 		    policy->act_cnt[i].fate_action == MLX5_FLOW_FATE_MTR)
 			next_fm = mlx5_flow_meter_find(priv,
 					policy->act_cnt[i].next_mtr_id, NULL);
-		TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
+		RTE_TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
 				   next_port, tmp) {
 			claim_zero(mlx5_flow_os_destroy_flow(color_rule->rule));
 			tbl = container_of(color_rule->matcher->tbl,
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index a24bd9c7ae..ba4e9fca17 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -2168,7 +2168,7 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
 			priv->mtr_idx_tbl = NULL;
 		}
 	} else {
-		TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
+		RTE_TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
 			fm = &legacy_fm->fm;
 			if (mlx5_flow_meter_params_flush(dev, fm, 0))
 				return -rte_mtr_error_set(error, EINVAL,
diff --git a/drivers/net/softnic/rte_eth_softnic_flow.c b/drivers/net/softnic/rte_eth_softnic_flow.c
index 27eaf380cd..7d054c38d2 100644
--- a/drivers/net/softnic/rte_eth_softnic_flow.c
+++ b/drivers/net/softnic/rte_eth_softnic_flow.c
@@ -2207,7 +2207,8 @@ pmd_flow_flush(struct rte_eth_dev *dev,
 			void *temp;
 			int status;
 
-			TAILQ_FOREACH_SAFE(flow, &table->flows, node, temp) {
+			RTE_TAILQ_FOREACH_SAFE(flow, &table->flows, node,
+				temp) {
 				/* Rule delete. */
 				status = softnic_pipeline_table_rule_delete
 						(softnic,
diff --git a/drivers/net/softnic/rte_eth_softnic_swq.c b/drivers/net/softnic/rte_eth_softnic_swq.c
index 2083d0a976..afe6f05e29 100644
--- a/drivers/net/softnic/rte_eth_softnic_swq.c
+++ b/drivers/net/softnic/rte_eth_softnic_swq.c
@@ -39,7 +39,7 @@ softnic_softnic_swq_free_keep_rxq_txq(struct pmd_internals *p)
 {
 	struct softnic_swq *swq, *tswq;
 
-	TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
+	RTE_TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
 		if ((strncmp(swq->name, "RXQ", strlen("RXQ")) == 0) ||
 			(strncmp(swq->name, "TXQ", strlen("TXQ")) == 0))
 			continue;
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c961e18d67..7b80370b36 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1606,7 +1606,7 @@ remove_hw_queues_from_list(struct dpaa2_dpdmai_dev *dpdmai_dev)
 
 	DPAA2_QDMA_FUNC_TRACE();
 
-	TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
+	RTE_TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
 		if (queue->dpdmai_dev == dpdmai_dev) {
 			TAILQ_REMOVE(&qdma_queue_list, queue, next);
 			rte_free(queue);
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 7017124414..3ebf62e697 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -434,7 +434,7 @@ struct rte_bbdev_callback;
 struct rte_intr_handle;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
+RTE_TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
 
 /**
  * @internal The data structure associated with a device. Drivers can access
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 11f4e6fdbf..f86bf2260b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -879,7 +879,7 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 struct rte_cryptodev_callback;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+RTE_TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
 
 /**
  * Structure used to hold information about the callbacks to be called for a
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/rte_cryptodev_pmd.h
index 1274436870..9542cbf263 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/rte_cryptodev_pmd.h
@@ -66,7 +66,7 @@ struct rte_cryptodev_global {
 
 /* Cryptodev driver, containing the driver ID */
 struct cryptodev_driver {
-	TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
 	const struct rte_driver *driver;
 	uint8_t id;
 };
diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c
index 23aaf8b7e4..2e2f35c47e 100644
--- a/lib/eal/common/eal_common_devargs.c
+++ b/lib/eal/common/eal_common_devargs.c
@@ -291,7 +291,7 @@ rte_devargs_insert(struct rte_devargs **da)
 	if (*da == NULL || (*da)->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
 		if (listed_da == *da)
 			/* devargs already in the list */
 			return 0;
@@ -358,7 +358,7 @@ rte_devargs_remove(struct rte_devargs *devargs)
 	if (devargs == NULL || devargs->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
 		if (strcmp(d->bus->name, devargs->bus->name) == 0 &&
 		    strcmp(d->name, devargs->name) == 0) {
 			TAILQ_REMOVE(&devargs_list, d, next);
diff --git a/lib/eal/common/eal_common_log.c b/lib/eal/common/eal_common_log.c
index ec8fe23a7f..1be35f5397 100644
--- a/lib/eal/common/eal_common_log.c
+++ b/lib/eal/common/eal_common_log.c
@@ -10,6 +10,7 @@
 #include <errno.h>
 #include <regex.h>
 #include <fnmatch.h>
+#include <sys/queue.h>
 
 #include <rte_eal.h>
 #include <rte_log.h>
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index ff5861b5f3..24f5ceaab0 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -283,7 +283,7 @@ eal_option_device_parse(void)
 	void *tmp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
 		if (ret == 0) {
 			ret = rte_devargs_add(devopt->type, devopt->arg);
 			if (ret)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 64cf4e81c8..86dab1f057 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -8,6 +8,7 @@
 #include <stdbool.h>
 #include <stdint.h>
 #include <stdio.h>
+#include <sys/queue.h>
 
 #include <rte_dev.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/freebsd/include/rte_os.h b/lib/eal/freebsd/include/rte_os.h
index 627f0483ab..9d8a69008c 100644
--- a/lib/eal/freebsd/include/rte_os.h
+++ b/lib/eal/freebsd/include/rte_os.h
@@ -11,6 +11,16 @@
  */
 
 #include <pthread_np.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
 
 typedef cpuset_t rte_cpuset_t;
 #define RTE_HAS_CPUSET
diff --git a/lib/eal/include/rte_bus.h b/lib/eal/include/rte_bus.h
index 80b154fb98..84d364df3f 100644
--- a/lib/eal/include/rte_bus.h
+++ b/lib/eal/include/rte_bus.h
@@ -19,13 +19,12 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_log.h>
 #include <rte_dev.h>
 
 /** Double linked list of buses */
-TAILQ_HEAD(rte_bus_list, rte_bus);
+RTE_TAILQ_HEAD(rte_bus_list, rte_bus);
 
 
 /**
@@ -250,7 +249,7 @@ typedef enum rte_iova_mode (*rte_bus_get_iommu_class_t)(void);
  * A structure describing a generic bus.
  */
 struct rte_bus {
-	TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
+	RTE_TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
 	const char *name;            /**< Name of the bus */
 	rte_bus_scan_t scan;         /**< Scan for devices attached to bus */
 	rte_bus_probe_t probe;       /**< Probe devices on bus */
diff --git a/lib/eal/include/rte_class.h b/lib/eal/include/rte_class.h
index 856d09b22d..d560339652 100644
--- a/lib/eal/include/rte_class.h
+++ b/lib/eal/include/rte_class.h
@@ -22,18 +22,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
-
 #include <rte_dev.h>
 
 /** Double linked list of classes */
-TAILQ_HEAD(rte_class_list, rte_class);
+RTE_TAILQ_HEAD(rte_class_list, rte_class);
 
 /**
  * A structure describing a generic device class.
  */
 struct rte_class {
-	TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
+	RTE_TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
 	const char *name; /**< Name of the class */
 	rte_dev_iterate_t dev_iterate; /**< Device iterator. */
 };
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6dd72c11a1..f6efe0c94e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -18,7 +18,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_compat.h>
@@ -75,7 +74,7 @@ struct rte_mem_resource {
  * A structure describing a device driver.
  */
 struct rte_driver {
-	TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
 	const char *name;                   /**< Driver name. */
 	const char *alias;              /**< Driver alias. */
 };
@@ -90,7 +89,7 @@ struct rte_driver {
  * A structure describing a generic device.
  */
 struct rte_device {
-	TAILQ_ENTRY(rte_device) next; /**< Next device */
+	RTE_TAILQ_ENTRY(rte_device) next; /**< Next device */
 	const char *name;             /**< Device name */
 	const struct rte_driver *driver; /**< Driver assigned after probing */
 	const struct rte_bus *bus;    /**< Bus handle assigned on scan */
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index cd90944fe8..957477b398 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -21,7 +21,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 #include <rte_compat.h>
 #include <rte_bus.h>
 
@@ -76,7 +75,7 @@ enum rte_devtype {
  */
 struct rte_devargs {
 	/** Next in list. */
-	TAILQ_ENTRY(rte_devargs) next;
+	RTE_TAILQ_ENTRY(rte_devargs) next;
 	/** Type of device. */
 	enum rte_devtype type;
 	/** Device policy. */
diff --git a/lib/eal/include/rte_log.h b/lib/eal/include/rte_log.h
index b706bb8710..bb3523467b 100644
--- a/lib/eal/include/rte_log.h
+++ b/lib/eal/include/rte_log.h
@@ -21,7 +21,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdarg.h>
 #include <stdbool.h>
-#include <sys/queue.h>
 
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/eal/include/rte_service.h b/lib/eal/include/rte_service.h
index c7d037d862..1c9275c32a 100644
--- a/lib/eal/include/rte_service.h
+++ b/lib/eal/include/rte_service.h
@@ -29,7 +29,6 @@ extern "C" {
 
 #include<stdio.h>
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/include/rte_tailq.h b/lib/eal/include/rte_tailq.h
index b6fe4e5f78..e860582cda 100644
--- a/lib/eal/include/rte_tailq.h
+++ b/lib/eal/include/rte_tailq.h
@@ -15,17 +15,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <stdio.h>
 #include <rte_debug.h>
 
 /** dummy structure type used by the rte_tailq APIs */
 struct rte_tailq_entry {
-	TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
+	RTE_TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
 	void *data; /**< Pointer to the data referenced by this tailq entry */
 };
 /** dummy */
-TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
+RTE_TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
 
 #define RTE_TAILQ_NAMESIZE 32
 
@@ -48,7 +47,7 @@ struct rte_tailq_elem {
 	 * rte_eal_tailqs_init()
 	 */
 	struct rte_tailq_head *head;
-	TAILQ_ENTRY(rte_tailq_elem) next;
+	RTE_TAILQ_ENTRY(rte_tailq_elem) next;
 	const char name[RTE_TAILQ_NAMESIZE];
 };
 
@@ -126,10 +125,10 @@ RTE_INIT(tailqinitfn_ ##t) \
 }
 
 /* This macro permits both remove and free var within the loop safely.*/
-#ifndef TAILQ_FOREACH_SAFE
-#define TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
-	for ((var) = TAILQ_FIRST((head));			\
-	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1);	\
+#ifndef RTE_TAILQ_FOREACH_SAFE
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = RTE_TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = RTE_TAILQ_NEXT((var), field), 1); \
 	    (var) = (tvar))
 #endif
 
diff --git a/lib/eal/linux/include/rte_os.h b/lib/eal/linux/include/rte_os.h
index 1618b4df22..35c07c70cb 100644
--- a/lib/eal/linux/include/rte_os.h
+++ b/lib/eal/linux/include/rte_os.h
@@ -11,6 +11,16 @@
  */
 
 #include <sched.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
 
 #ifdef CPU_SETSIZE /* may require _GNU_SOURCE */
 typedef cpu_set_t rte_cpuset_t;
diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c
index e5dc54efb8..103c1f909d 100644
--- a/lib/eal/windows/eal_alarm.c
+++ b/lib/eal/windows/eal_alarm.c
@@ -4,6 +4,7 @@
 
 #include <stdatomic.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 
 #include <rte_alarm.h>
 #include <rte_spinlock.h>
diff --git a/lib/eal/windows/include/rte_os.h b/lib/eal/windows/include/rte_os.h
index 66c711d458..a0a311495e 100644
--- a/lib/eal/windows/include/rte_os.h
+++ b/lib/eal/windows/include/rte_os.h
@@ -18,6 +18,33 @@
 extern "C" {
 #endif
 
+/* These macros are compatible with bundled sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) \
+struct name { \
+	struct type *tqh_first; \
+	struct type **tqh_last; \
+}
+#define RTE_TAILQ_ENTRY(type) \
+struct { \
+	struct type *tqe_next; \
+	struct type **tqe_prev; \
+}
+#define RTE_TAILQ_FOREACH(var, head, field) \
+	for ((var) = RTE_TAILQ_FIRST((head)); \
+	    (var); \
+	    (var) = RTE_TAILQ_NEXT((var), field))
+#define RTE_TAILQ_FIRST(head) ((head)->tqh_first)
+#define RTE_TAILQ_NEXT(elm, field) ((elm)->field.tqe_next)
+#define RTE_STAILQ_HEAD(name, type) \
+struct name { \
+	struct type *stqh_first; \
+	struct type **stqh_last; \
+}
+#define RTE_STAILQ_ENTRY(type) \
+struct { \
+	struct type *stqe_next; \
+}
+
 /* cpu_set macros implementation */
 #define RTE_CPU_AND(dst, src1, src2) CPU_AND(dst, src1, src2)
 #define RTE_CPU_OR(dst, src1, src2) CPU_OR(dst, src1, src2)
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index 77f46809f8..5bf517fee9 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -759,7 +759,7 @@ rte_efd_free(struct rte_efd_table *table)
 	efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list);
 	rte_mcfg_tailq_write_lock();
 
-	TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
 		if (te->data == (void *) table) {
 			TAILQ_REMOVE(efd_list, te, next);
 			rte_free(te);
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..d2c9ec42c7 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -21,7 +21,7 @@
 
 struct rte_eth_dev_callback;
 /** @internal Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
+RTE_TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
 struct rte_eth_dev;
 
diff --git a/lib/hash/rte_fbk_hash.h b/lib/hash/rte_fbk_hash.h
index c4d6976d2b..9c3a61c1d6 100644
--- a/lib/hash/rte_fbk_hash.h
+++ b/lib/hash/rte_fbk_hash.h
@@ -17,7 +17,6 @@
 
 #include <stdint.h>
 #include <errno.h>
-#include <sys/queue.h>
 
 #ifdef __cplusplus
 extern "C" {
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index d5a95a6e00..696a1121e2 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2021 Intel Corporation
  */
 
+#include <sys/queue.h>
+
 #include <rte_thash.h>
 #include <rte_tailq.h>
 #include <rte_random.h>
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index 0bfe64b14e..80f931c32a 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -62,7 +62,7 @@ struct ip_frag_key {
  * First two entries in the frags[] array are for the last and first fragments.
  */
 struct ip_frag_pkt {
-	TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
+	RTE_TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
 	struct ip_frag_key key;           /**< fragmentation key */
 	uint64_t             start;       /**< creation timestamp */
 	uint32_t             total_size;  /**< expected reassembled size */
@@ -83,7 +83,7 @@ struct rte_ip_frag_death_row {
 	/**< mbufs to be freed */
 };
 
-TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
+RTE_TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
 
 /** fragmentation table statistics */
 struct ip_frag_tbl_stat {
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 59a588425b..c5f859ae71 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1337,7 +1337,7 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 
 	rte_mcfg_mempool_read_lock();
 
-	TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
+	RTE_TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
 		(*func)((struct rte_mempool *) te->data, arg);
 	}
 
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 4235d6f0bf..f57ecbd6fc 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -38,7 +38,6 @@
 #include <stdint.h>
 #include <errno.h>
 #include <inttypes.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_spinlock.h>
@@ -141,7 +140,7 @@ struct rte_mempool_objsz {
  * double-frees.
  */
 struct rte_mempool_objhdr {
-	STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;          /**< The mempool owning the object. */
 	rte_iova_t iova;                 /**< IO address of the object. */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -152,7 +151,7 @@ struct rte_mempool_objhdr {
 /**
  * A list of object headers type
  */
-STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
+RTE_STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 
@@ -171,7 +170,7 @@ struct rte_mempool_objtlr {
 /**
  * A list of memory where objects are stored
  */
-STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
+RTE_STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
 
 /**
  * Callback used to free a memory chunk
@@ -186,7 +185,7 @@ typedef void (rte_mempool_memchunk_free_cb_t)(struct rte_mempool_memhdr *memhdr,
  * and physically contiguous.
  */
 struct rte_mempool_memhdr {
-	STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;  /**< The mempool owning the chunk */
 	void *addr;              /**< Virtual address of the chunk */
 	rte_iova_t iova;         /**< IO address of the chunk */
diff --git a/lib/pci/rte_pci.h b/lib/pci/rte_pci.h
index 1f33d687f4..71cbd441c7 100644
--- a/lib/pci/rte_pci.h
+++ b/lib/pci/rte_pci.h
@@ -18,7 +18,6 @@ extern "C" {
 
 #include <stdio.h>
 #include <limits.h>
-#include <sys/queue.h>
 #include <inttypes.h>
 #include <sys/types.h>
 
diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 16718ca7f1..43ce1a29d4 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -26,7 +26,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdint.h>
 #include <string.h>
-#include <sys/queue.h>
 #include <errno.h>
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/table/rte_swx_table.h b/lib/table/rte_swx_table.h
index e23f2304c6..f93e5f3f95 100644
--- a/lib/table/rte_swx_table.h
+++ b/lib/table/rte_swx_table.h
@@ -16,7 +16,8 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
+
+#include <rte_os.h>
 
 /** Match type. */
 enum rte_swx_table_match_type {
@@ -68,7 +69,7 @@ struct rte_swx_table_entry {
 	/** Used to facilitate the membership of this table entry to a
 	 * linked list.
 	 */
-	TAILQ_ENTRY(rte_swx_table_entry) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_entry) node;
 
 	/** Key value for the current entry. Array of *key_size* bytes or NULL
 	 * if the *key_size* for the current table is 0.
@@ -111,7 +112,7 @@ struct rte_swx_table_entry {
 };
 
 /** List of table entries. */
-TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
+RTE_TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
 
 /**
  * Table memory footprint get
diff --git a/lib/table/rte_swx_table_selector.h b/lib/table/rte_swx_table_selector.h
index 71b6a74810..62988d2856 100644
--- a/lib/table/rte_swx_table_selector.h
+++ b/lib/table/rte_swx_table_selector.h
@@ -16,7 +16,6 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_compat.h>
 
@@ -56,7 +55,7 @@ struct rte_swx_table_selector_params {
 /** Group member parameters. */
 struct rte_swx_table_selector_member {
 	/** Linked list connectivity. */
-	TAILQ_ENTRY(rte_swx_table_selector_member) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_selector_member) node;
 
 	/** Member ID. */
 	uint32_t member_id;
@@ -66,7 +65,7 @@ struct rte_swx_table_selector_member {
 };
 
 /** List of group members. */
-TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
+RTE_TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
 
 /** Group parameters. */
 struct rte_swx_table_selector_group {
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e0b67721b6..e4a445e709 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -32,7 +32,7 @@ vhost_user_iotlb_pending_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_pending_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -100,7 +100,8 @@ vhost_user_iotlb_pending_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next,
+				temp_node) {
 		if (node->iova < iova)
 			continue;
 		if (node->iova >= iova + size)
@@ -121,7 +122,7 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -141,7 +142,7 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
 
 	entry_idx = rte_rand() % vq->iotlb_cache_nr;
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		if (!entry_idx) {
 			TAILQ_REMOVE(&vq->iotlb_list, node, next);
 			rte_mempool_put(vq->iotlb_pool, node);
@@ -218,7 +219,7 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		/* Sorted list */
 		if (unlikely(iova + size < node->iova))
 			break;
diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/rte_vdpa_dev.h
index bfada387b0..b0f494815f 100644
--- a/lib/vhost/rte_vdpa_dev.h
+++ b/lib/vhost/rte_vdpa_dev.h
@@ -71,7 +71,7 @@ struct rte_vdpa_dev_ops {
  * vdpa device structure includes device address and device operations.
  */
 struct rte_vdpa_device {
-	TAILQ_ENTRY(rte_vdpa_device) next;
+	RTE_TAILQ_ENTRY(rte_vdpa_device) next;
 	/** Generic device information */
 	struct rte_device *device;
 	/** vdpa device operations */
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index 99a926a772..6dd91859ac 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -115,7 +115,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
 	int ret = -1;
 
 	rte_spinlock_lock(&vdpa_device_list_lock);
-	TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
+	RTE_TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
 		if (dev != cur_dev)
 			continue;
 
-- 
2.30.2


^ permalink raw reply	[relevance 1%]

* [dpdk-dev] [RFC 04/15] eventdev: move inline APIs into separate structure
  @ 2021-08-23 19:40  2% ` pbhagavatula
    1 sibling, 0 replies; 200+ results
From: pbhagavatula @ 2021-08-23 19:40 UTC (permalink / raw)
  To: jerinj, Ray Kinsella; +Cc: konstantin.ananyev, dev, Pavan Nikhilesh

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Move fastpath inline function pointers from rte_eventdev into a
separate structure accessed via a flat array.
The intention is to make rte_eventdev and related structures private
to avoid future API/ABI breakages.
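
As an illustration (not part of this patch), a public inline wrapper
could then resolve the per-device callback through the flat array,
assuming <rte_eventdev.h> is included; the wrapper name below is
hypothetical, only rte_eventdev_api and the enqueue prototype are
taken from this RFC:

static __rte_always_inline uint16_t
my_event_enqueue(uint8_t dev_id, uint8_t port_id,
		 const struct rte_event *ev)
{
	/* flat array indexed by device id, allocated in
	 * rte_event_pmd_allocate() and reset to dummy callbacks
	 * until the device is configured.
	 */
	return rte_eventdev_api[dev_id].enqueue(dev_id, port_id, ev);
}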

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 lib/eventdev/eventdev_pmd.h      |  10 ++++
 lib/eventdev/eventdev_private.c  | 100 +++++++++++++++++++++++++++++++
 lib/eventdev/meson.build         |   1 +
 lib/eventdev/rte_eventdev.c      |  25 +++++++-
 lib/eventdev/rte_eventdev_core.h |  44 ++++++++++++++
 lib/eventdev/version.map         |   4 ++
 6 files changed, 183 insertions(+), 1 deletion(-)
 create mode 100644 lib/eventdev/eventdev_private.c

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index a25d3f1fb5..5eaa29fe14 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1193,6 +1193,16 @@ __rte_internal
 int
 rte_event_pmd_release(struct rte_eventdev *eventdev);
 
+/**
+ * Reset eventdevice fastpath APIs to dummy values.
+ *
+ * @param rba
+ * The *api* pointer to reset.
+ */
+__rte_internal
+void
+rte_event_dev_api_reset(struct rte_eventdev_api *api);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
new file mode 100644
index 0000000000..c60fd2b522
--- /dev/null
+++ b/lib/eventdev/eventdev_private.c
@@ -0,0 +1,100 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2021 Marvell.
+ */
+
+#include "eventdev_pmd.h"
+#include "rte_eventdev.h"
+
+static uint16_t
+dummy_event_enqueue(__rte_unused uint8_t dev_id, __rte_unused uint8_t port_id,
+		    __rte_unused const struct rte_event *ev)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_enqueue_burst(__rte_unused uint8_t dev_id,
+			  __rte_unused uint8_t port_id,
+			  __rte_unused const struct rte_event ev[],
+			  __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event enqueue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue(__rte_unused uint8_t dev_id, __rte_unused uint8_t port_id,
+		    __rte_unused struct rte_event *ev,
+		    __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_dequeue_burst(__rte_unused uint8_t dev_id,
+			  __rte_unused uint8_t port_id,
+			  __rte_unused struct rte_event ev[],
+			  __rte_unused uint16_t nb_events,
+			  __rte_unused uint64_t timeout_ticks)
+{
+	RTE_EDEV_LOG_ERR(
+		"event dequeue burst requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue(__rte_unused uint8_t dev_id,
+			       __rte_unused uint8_t port_id,
+			       __rte_unused struct rte_event ev[],
+			       __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_tx_adapter_enqueue_same_dest(__rte_unused uint8_t dev_id,
+					 __rte_unused uint8_t port_id,
+					 __rte_unused struct rte_event ev[],
+					 __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event Tx adapter enqueue same destination requested for unconfigured event device");
+	return 0;
+}
+
+static uint16_t
+dummy_event_crypto_adapter_enqueue(__rte_unused uint8_t dev_id,
+				   __rte_unused uint8_t port_id,
+				   __rte_unused struct rte_event ev[],
+				   __rte_unused uint16_t nb_events)
+{
+	RTE_EDEV_LOG_ERR(
+		"event crypto adapter enqueue requested for unconfigured event device");
+	return 0;
+}
+
+void
+rte_event_dev_api_reset(struct rte_eventdev_api *api)
+{
+	static const struct rte_eventdev_api dummy = {
+		.enqueue = dummy_event_enqueue,
+		.enqueue_burst = dummy_event_enqueue_burst,
+		.enqueue_new_burst = dummy_event_enqueue_burst,
+		.enqueue_forward_burst = dummy_event_enqueue_burst,
+		.dequeue = dummy_event_dequeue,
+		.dequeue_burst = dummy_event_dequeue_burst,
+		.txa_enqueue = dummy_event_tx_adapter_enqueue,
+		.txa_enqueue_same_dest =
+			dummy_event_tx_adapter_enqueue_same_dest,
+		.ca_enqueue = dummy_event_crypto_adapter_enqueue,
+	};
+
+	*api = dummy;
+}
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build
index 8b51fde361..9051ff04b7 100644
--- a/lib/eventdev/meson.build
+++ b/lib/eventdev/meson.build
@@ -8,6 +8,7 @@ else
 endif
 
 sources = files(
+        'eventdev_private.c',
         'rte_eventdev.c',
         'rte_event_ring.c',
         'eventdev_trace_points.c',
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 21c5c55086..5ff8596788 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -44,6 +44,9 @@ static struct rte_eventdev_global eventdev_globals = {
 	.nb_devs		= 0
 };
 
+/* Public fastpath APIs. */
+struct rte_eventdev_api *rte_eventdev_api;
+
 /* Event dev north bound API implementation */
 
 uint8_t
@@ -394,8 +397,9 @@ int
 rte_event_dev_configure(uint8_t dev_id,
 			const struct rte_event_dev_config *dev_conf)
 {
-	struct rte_eventdev *dev;
 	struct rte_event_dev_info info;
+	struct rte_eventdev_api api;
+	struct rte_eventdev *dev;
 	int diag;
 
 	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
@@ -564,10 +568,14 @@ rte_event_dev_configure(uint8_t dev_id,
 		return diag;
 	}
 
+	api = rte_eventdev_api[dev_id];
+	rte_event_dev_api_reset(&api);
+
 	/* Configure the device */
 	diag = (*dev->dev_ops->dev_configure)(dev);
 	if (diag != 0) {
 		RTE_EDEV_LOG_ERR("dev%d dev_configure = %d", dev_id, diag);
+		rte_event_dev_api_reset(&api);
 		rte_event_dev_queue_config(dev, 0);
 		rte_event_dev_port_config(dev, 0);
 	}
@@ -1396,6 +1404,7 @@ rte_event_dev_close(uint8_t dev_id)
 		return -EBUSY;
 	}
 
+	rte_event_dev_api_reset(&rte_eventdev_api[dev_id]);
 	rte_eventdev_trace_close(dev_id);
 	return (*dev->dev_ops->dev_close)(dev);
 }
@@ -1479,6 +1488,20 @@ rte_event_pmd_allocate(const char *name, int socket_id)
 		}
 	}
 
+	if (rte_eventdev_api == NULL) {
+		rte_eventdev_api = rte_zmalloc("Eventdev_api",
+					       sizeof(struct rte_eventdev_api) *
+						       RTE_EVENT_MAX_DEVS,
+					       RTE_CACHE_LINE_SIZE);
+		if (rte_eventdev_api == NULL) {
+			RTE_EDEV_LOG_ERR(
+				"Unable to allocate memory for fastpath eventdev API array");
+			return NULL;
+		}
+		for (dev_id = 0; dev_id < RTE_EVENT_MAX_DEVS; dev_id++)
+			rte_event_dev_api_reset(&rte_eventdev_api[dev_id]);
+	}
+
 	if (rte_event_pmd_get_named_dev(name) != NULL) {
 		RTE_EDEV_LOG_ERR("Event device with name %s already "
 				"allocated!", name);
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 97dfec1ae1..4a7edacb0e 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -12,23 +12,39 @@
 extern "C" {
 #endif
 
+typedef uint16_t (*rte_event_enqueue_t)(uint8_t dev_id, uint8_t port_id,
+					const struct rte_event *ev);
 typedef uint16_t (*event_enqueue_t)(void *port, const struct rte_event *ev);
 /**< @internal Enqueue event on port of a device */
 
+typedef uint16_t (*rte_event_enqueue_burst_t)(uint8_t dev_id, uint8_t port_id,
+					      const struct rte_event ev[],
+					      uint16_t nb_events);
 typedef uint16_t (*event_enqueue_burst_t)(void *port,
 					  const struct rte_event ev[],
 					  uint16_t nb_events);
 /**< @internal Enqueue burst of events on port of a device */
 
+typedef uint16_t (*rte_event_dequeue_t)(uint8_t dev_id, uint8_t port_id,
+					struct rte_event *ev,
+					uint64_t timeout_ticks);
 typedef uint16_t (*event_dequeue_t)(void *port, struct rte_event *ev,
 				    uint64_t timeout_ticks);
 /**< @internal Dequeue event from port of a device */
 
+typedef uint16_t (*rte_event_dequeue_burst_t)(uint8_t dev_id, uint8_t port_id,
+					      struct rte_event ev[],
+					      uint16_t nb_events,
+					      uint64_t timeout_ticks);
 typedef uint16_t (*event_dequeue_burst_t)(void *port, struct rte_event ev[],
 					  uint16_t nb_events,
 					  uint64_t timeout_ticks);
 /**< @internal Dequeue burst of events from port of a device */
 
+typedef uint16_t (*rte_event_tx_adapter_enqueue_t)(uint8_t dev_id,
+						   uint8_t port_id,
+						   struct rte_event ev[],
+						   uint16_t nb_events);
 typedef uint16_t (*event_tx_adapter_enqueue)(void *port, struct rte_event ev[],
 					     uint16_t nb_events);
 /**< @internal Enqueue burst of events on port of a device */
@@ -40,11 +56,39 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port,
  * burst having same destination Ethernet port & Tx queue.
  */
 
+typedef uint16_t (*rte_event_crypto_adapter_enqueue_t)(uint8_t dev_id,
+						       uint8_t port_id,
+						       struct rte_event ev[],
+						       uint16_t nb_events);
 typedef uint16_t (*event_crypto_adapter_enqueue)(void *port,
 						 struct rte_event ev[],
 						 uint16_t nb_events);
 /**< @internal Enqueue burst of events on crypto adapter */
 
+struct rte_eventdev_api {
+	rte_event_enqueue_t enqueue;
+	/**< PMD enqueue function. */
+	rte_event_enqueue_burst_t enqueue_burst;
+	/**< PMD enqueue burst function. */
+	rte_event_enqueue_burst_t enqueue_new_burst;
+	/**< PMD enqueue burst new function. */
+	rte_event_enqueue_burst_t enqueue_forward_burst;
+	/**< PMD enqueue burst fwd function. */
+	rte_event_dequeue_t dequeue;
+	/**< PMD dequeue function. */
+	rte_event_dequeue_burst_t dequeue_burst;
+	/**< PMD dequeue burst function. */
+	rte_event_tx_adapter_enqueue_t txa_enqueue;
+	/**< PMD Tx adapter enqueue function. */
+	rte_event_tx_adapter_enqueue_t txa_enqueue_same_dest;
+	/**< PMD Tx adapter enqueue same destination function. */
+	rte_event_crypto_adapter_enqueue_t ca_enqueue;
+	/**< PMD Crypto adapter enqueue function. */
+	uintptr_t reserved[2];
+} __rte_cache_aligned;
+
+extern struct rte_eventdev_api *rte_eventdev_api;
+
 #define RTE_EVENTDEV_NAME_MAX_LEN (64)
 /**< @internal Max length of name of event PMD */
 
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 5f1fe412a4..bc2912dcfd 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -85,6 +85,9 @@ DPDK_22 {
 	rte_event_timer_cancel_burst;
 	rte_eventdevs;
 
+	#added in 21.11
+	rte_eventdev_api;
+
 	local: *;
 };
 
@@ -141,6 +144,7 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	rte_event_dev_api_reset;
 	rte_event_pmd_selftest_seqn_dynfield_offset;
 	rte_event_pmd_allocate;
 	rte_event_pmd_get_named_dev;
-- 
2.17.1


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH] doc: abstract the behaviour of rte_ctrl_thread_create
  2021-08-23  9:40  3%     ` Olivier Matz
@ 2021-08-23 21:18  0%       ` Honnappa Nagarahalli
  0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2021-08-23 21:18 UTC (permalink / raw)
  To: Olivier Matz
  Cc: thomas, dev, lucp.at.work, david.marchand, Ruifeng Wang, nd, nd

<snip>

> > >
> > > 30/07/2021 23:44, Honnappa Nagarahalli:
> > > > The current expected behaviour of the function
> > > > rte_ctrl_thread_create is rigid which makes the implementation of the
> function complex.
> > > > Make the expected behaviour abstract to allow for simplified
> > > > implementation.
> > > >
> > > > With this change, the calls to pthread_setaffinity_np can be moved
> > > > to the control thread. This will avoid the use of
> > > > pthread_barrier_wait and simplify the synchronization mechanism
> > > > between rte_ctrl_thread_create and the calling thread.
> > > >
> > > > Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > ---
> > > > +* eal: The expected behaviour of the function
> > > > +``rte_ctrl_thread_create``
> > > > +  abstracted to allow for simplified implementation. The new
> > > > +behaviour is
> > > > +  as follows:
> > > > +  Creates a control thread with the given name. The affinity of
> > > > +the new
> > > > +  thread is based on the CPU affinity retrieved at the time
> > > > +rte_eal_init()
> > > > +  was called, the dataplane and service lcores are then excluded.
> > >
> > > I don't understand what is different of the current API:
> > >  * Wrapper to pthread_create(), pthread_setname_np() and
> > >  * pthread_setaffinity_np(). The affinity of the new thread is based
> > >  * on the CPU affinity retrieved at the time rte_eal_init() was
> > > called,
> > >  * the dataplane and service lcores are then excluded.
> > My concern is for the word "Wrapper". I am not sure how much we are
> bound by that to keep the code as a "wrapper".
> > The new patch does not change the high level behavior.
> 
> I am ok to remove the word "wrapper" from the description, and I agree it can
> be better described without quoting the pthread_* functions.
> 
> > Are you saying you are ok with the patch without the deprecation notice?
> 
> I don't think it requires a deprecation notice if the API and ABI is left
> unchanged. To be honnest, I find a bit hard to understand what is really
> changed by reading the deprecation notice:
Thanks Olivier. I agree, I was also not sure. The term "wrapper" made me feel that we are defining certain return codes to the application.

At the macro level, I think the expected behavior remains the same.

> 
> > +* eal: The expected behaviour of the function
> > +``rte_ctrl_thread_create``
> > +  abstracted to allow for simplified implementation. The new
> > +behaviour is
> > +  as follows:
> > +  Creates a control thread with the given name. The affinity of the
> > +new
> > +  thread is based on the CPU affinity retrieved at the time
> > +rte_eal_init()
> > +  was called, the dataplane and service lcores are then excluded.
> 
> I'll send my comments to your patch:
> http://patches.dpdk.org/project/dpdk/patch/20210802051652.3611-1-
> honnappa.nagarahalli@arm.com/
> 
> 
> Thanks,
> Olivier

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [dpdk-ci] [PATCH] version: 21.11-rc0
  2021-08-17 12:04  4%   ` [dpdk-dev] [dpdk-ci] " Lincoln Lavoie
  2021-08-17 15:19  0%     ` David Marchand
@ 2021-08-24  7:58  3%     ` David Marchand
  2021-08-24 12:19  3%       ` Lincoln Lavoie
  1 sibling, 1 reply; 200+ results
From: David Marchand @ 2021-08-24  7:58 UTC (permalink / raw)
  To: Lincoln Lavoie, dpdklab
  Cc: Thomas Monjalon, ci, dev, Ray Kinsella, Aaron Conole

Hello Lincoln,

On Tue, Aug 17, 2021 at 2:04 PM Lincoln Lavoie <lylavoie@iol.unh.edu> wrote:
>
> Hi David,
>
> ABI testing was disabled / stopped on Friday in the Community CI lab.  Patches from before that for 21.11 would have still had the test run and could have failures listed. I'm not sure if there is a way to "remove" those failure marks from patchworks.  But, for all new patches since then, ABI hasn't been run.

I can see new reports for ABI, please can someone from UNH double check?
https://lab.dpdk.org/results/dashboard/patchsets/18301/
https://lab.dpdk.org/results/dashboard/patchsets/18303/
https://lab.dpdk.org/results/dashboard/patchsets/18325/


-- 
David Marchand


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [dpdk-ci] [PATCH] version: 21.11-rc0
  2021-08-24  7:58  3%     ` David Marchand
@ 2021-08-24 12:19  3%       ` Lincoln Lavoie
  0 siblings, 0 replies; 200+ results
From: Lincoln Lavoie @ 2021-08-24 12:19 UTC (permalink / raw)
  To: David Marchand
  Cc: dpdklab, Thomas Monjalon, ci, dev, Ray Kinsella, Aaron Conole,
	Owen Hilyard

Hi David,

We'll check.  Owen was working on a change to allow ABI to still run for
merges onto the 20.11 branch, so the LTS branches can remain stable, while
per-patch testing is off for the 21.11 release.  Looks like that has a side
effect on how these patches were flagged in the system.

Cheers,
Lincoln

On Tue, Aug 24, 2021 at 3:58 AM David Marchand <david.marchand@redhat.com>
wrote:

> Hello Lincoln,
>
> On Tue, Aug 17, 2021 at 2:04 PM Lincoln Lavoie <lylavoie@iol.unh.edu>
> wrote:
> >
> > Hi David,
> >
> > ABI testing was disabled / stopped on Friday in the Community CI lab.
> Patches from before that for 21.11 would have still had the test run and
> could have failures listed. I'm not sure if there is a way to "remove"
> those failure marks from patchworks.  But, for all new patches since then,
> ABI hasn't been run.
>
> I can see new reports for ABI, please can someone from UNH double check?
> https://lab.dpdk.org/results/dashboard/patchsets/18301/
> https://lab.dpdk.org/results/dashboard/patchsets/18303/
> https://lab.dpdk.org/results/dashboard/patchsets/18325/
>
>
> --
> David Marchand
>
>

-- 
*Lincoln Lavoie*
Principal Engineer, Broadband Technologies
21 Madbury Rd., Ste. 100, Durham, NH 03824
lylavoie@iol.unh.edu
https://www.iol.unh.edu
+1-603-674-2755 (m)
<https://www.iol.unh.edu>

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [RFC 11/15] eventdev: reserve fields in timer object
  @ 2021-08-24 15:10  3%   ` Stephen Hemminger
  0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2021-08-24 15:10 UTC (permalink / raw)
  To: pbhagavatula; +Cc: jerinj, Erik Gabriel Carrillo, konstantin.ananyev, dev

On Tue, 24 Aug 2021 01:10:15 +0530
<pbhagavatula@marvell.com> wrote:

> From: Pavan Nikhilesh <pbhagavatula@marvell.com>
> 
> Reserve fields in rte_event_timer data structure to address future
> use cases.
> Also, remove volatile from rte_event_timer.
> 
> Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>

Reserved fields are not a good idea. They don't solve future API/ABI problems.

The issue is that you need to zero them and check that they are zero, otherwise
they can't safely be used later.  This happened with the Linux kernel
system calls, where in several cases a flag field was added for future use.
The problem is that old programs would work with any garbage in the flag
field, and therefore the flag could not be extended.

A better way is to make structures internal opaque objects that
can be resized.  Why is rte_event_timer_adapter exposed in the API?
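
To illustrate the point, a rough sketch (hypothetical names, not DPDK code):
a reserved field can only be repurposed later if today's implementation
already rejects non-zero values in it.

#include <errno.h>
#include <stdint.h>

struct evt_timer {
	uint64_t timeout;
	uint64_t reserved[2]; /* callers must zero this today */
};

int
evt_timer_arm(const struct evt_timer *t)
{
	/* Without this check from day one, old binaries may pass garbage
	 * here, and the field can never safely become a flags field later.
	 */
	if (t->reserved[0] != 0 || t->reserved[1] != 0)
		return -EINVAL;
	/* ... arm the timer ... */
	return 0;
}

An opaque handle allocated and resized by the library avoids the issue
entirely, at the cost of an allocation API.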

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v9] eal: remove sys/queue.h from public headers
  2021-08-23 13:03  1%           ` [dpdk-dev] [PATCH v8] " William Tu
@ 2021-08-24 16:21  1%             ` William Tu
  0 siblings, 0 replies; 200+ results
From: William Tu @ 2021-08-24 16:21 UTC (permalink / raw)
  To: dev; +Cc: Dmitry.Kozliuk, Nick Connolly

Currently there are some public headers that include 'sys/queue.h', which
is not POSIX, but is usually provided by the Linux/BSD system library.
(Not in POSIX.1, POSIX.1-2001, or POSIX.1-2008. Present on the BSDs.)
The file is missing on Windows. During the Windows build, DPDK uses a
bundled copy, so building a DPDK library works fine.  But when OVS or other
applications use DPDK as a library on Windows, the DPDK public headers
that include 'sys/queue.h' trigger an error because no such file exists.

One solution is to install 'lib/eal/windows/include/sys/queue.h' into the
Windows environment, such as [1]. However, this means DPDK exports the
functionality of 'sys/queue.h' into the environment, which might cause
symbols, macros and headers to clash with other applications.

The patch fixes it by removing "#include <sys/queue.h>" from DPDK public
headers, so programs including DPDK headers don't depend on the system to
provide 'sys/queue.h'. Where these public headers use macros such as
TAILQ_xxx, we replace them with the RTE_-prefixed ones. For Windows, we
copy the definitions from <sys/queue.h> to rte_os.h in the Windows EAL.
Note that these RTE_ macros are compatible with <sys/queue.h>, both at the
API level (to use with <sys/queue.h> macros in C files) and at the ABI
level (to avoid breaking it).

Additionally, TAILQ_FOREACH_SAFE is not part of <sys/queue.h>, so the
patch replaces it with RTE_TAILQ_FOREACH_SAFE.
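
As a rough usage sketch (hypothetical structure; it assumes the RTE_TAILQ_*
macros added by this patch are reachable through the headers included
below), application code can keep mixing both macro families in C files:

#include <stdlib.h>
#include <sys/queue.h>

#include <rte_common.h>
#include <rte_tailq.h>

struct my_elem {
	RTE_TAILQ_ENTRY(my_elem) next; /* same layout as TAILQ_ENTRY */
	int value;
};

RTE_TAILQ_HEAD(my_list, my_elem);

static void
my_list_drain(struct my_list *head)
{
	struct my_elem *e, *tmp;

	/* Safe removal while iterating, without relying on the
	 * non-standard TAILQ_FOREACH_SAFE from the system header.
	 */
	RTE_TAILQ_FOREACH_SAFE(e, head, next, tmp) {
		TAILQ_REMOVE(head, e, next);
		free(e);
	}
}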

[1] http://mails.dpdk.org/archives/dev/2021-August/216304.html

Suggested-by: Nick Connolly <nick.connolly@mayadata.io>
Suggested-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
Acked-by: Dmitry Kozliuk <Dmitry.Kozliuk@gmail.com>
Signed-off-by: William Tu <u9012063@gmail.com>
---
v8-v9:
* Acked by Dmitry Kozliuk
* remove #ifdef at RTE_TAILQ_FOREACH_SAFE
* remove period at title
v7-v8:
* remove duplicate RTE_TAILQ_FOREACH_SAFE at rte_os.h
  put the macro at rte_tailq.h
* remove inline comments
* diff
  https://github.com/williamtu/dpdk/compare/a4144ff11b..6cb7cd8daf
v6-v7:
* remove some redundant "#include <sys/queue.h>"
* remove extra newline, add comment at rte_os.h for windows
  use of bundled sys/queue

v5-v6:
* fix tab/indent issue, fix type and spelling
* fix duplicate RTE_TAILQ_FOREACH_SAFE
* fix build error due to drivers/net/mlx5/mlx5_flow_meter.c
---

 drivers/bus/auxiliary/private.h            |  1 +
 drivers/bus/auxiliary/rte_bus_auxiliary.h  |  5 ++--
 drivers/bus/dpaa/dpaa_bus.c                |  4 ++--
 drivers/bus/fslmc/fslmc_bus.c              |  4 ++--
 drivers/bus/fslmc/fslmc_vfio.c             |  9 +++++---
 drivers/bus/ifpga/rte_bus_ifpga.h          |  8 +++----
 drivers/bus/pci/pci_params.c               |  2 ++
 drivers/bus/pci/rte_bus_pci.h              | 13 +++++------
 drivers/bus/pci/windows/pci.c              |  3 +++
 drivers/bus/pci/windows/pci_netuio.c       |  2 ++
 drivers/bus/vdev/rte_bus_vdev.h            |  7 +++---
 drivers/bus/vdev/vdev.c                    |  3 ++-
 drivers/bus/vmbus/rte_bus_vmbus.h          | 13 +++++------
 drivers/net/bnxt/tf_ulp/bnxt_ulp.c         |  2 +-
 drivers/net/bonding/rte_eth_bond_flow.c    |  2 +-
 drivers/net/failsafe/failsafe_flow.c       |  2 +-
 drivers/net/i40e/i40e_ethdev.c             |  9 ++++----
 drivers/net/i40e/i40e_ethdev.h             |  1 +
 drivers/net/i40e/i40e_flow.c               |  6 ++---
 drivers/net/i40e/i40e_hash.c               |  2 +-
 drivers/net/i40e/rte_pmd_i40e.c            |  6 ++---
 drivers/net/iavf/iavf_generic_flow.c       | 14 +++++------
 drivers/net/ice/ice_dcf_ethdev.c           |  1 +
 drivers/net/ice/ice_ethdev.c               |  4 ++--
 drivers/net/ice/ice_generic_flow.c         | 14 +++++------
 drivers/net/ipn3ke/ipn3ke_flow.c           |  2 +-
 drivers/net/mlx5/mlx5_flow_dv.c            |  2 +-
 drivers/net/mlx5/mlx5_flow_meter.c         |  2 +-
 drivers/net/softnic/rte_eth_softnic_flow.c |  3 ++-
 drivers/net/softnic/rte_eth_softnic_swq.c  |  2 +-
 drivers/raw/dpaa2_qdma/dpaa2_qdma.c        |  2 +-
 lib/bbdev/rte_bbdev.h                      |  2 +-
 lib/cryptodev/rte_cryptodev.h              |  2 +-
 lib/cryptodev/rte_cryptodev_pmd.h          |  2 +-
 lib/eal/common/eal_common_devargs.c        |  4 ++--
 lib/eal/common/eal_common_log.c            |  1 +
 lib/eal/common/eal_common_options.c        |  2 +-
 lib/eal/common/eal_private.h               |  1 +
 lib/eal/freebsd/include/rte_os.h           | 10 ++++++++
 lib/eal/include/rte_bus.h                  |  5 ++--
 lib/eal/include/rte_class.h                |  6 ++---
 lib/eal/include/rte_dev.h                  |  5 ++--
 lib/eal/include/rte_devargs.h              |  3 +--
 lib/eal/include/rte_log.h                  |  1 -
 lib/eal/include/rte_service.h              |  1 -
 lib/eal/include/rte_tailq.h                | 15 +++++-------
 lib/eal/linux/include/rte_os.h             | 10 ++++++++
 lib/eal/windows/eal_alarm.c                |  1 +
 lib/eal/windows/include/rte_os.h           | 27 ++++++++++++++++++++++
 lib/efd/rte_efd.c                          |  2 +-
 lib/ethdev/rte_ethdev_core.h               |  2 +-
 lib/hash/rte_fbk_hash.h                    |  1 -
 lib/hash/rte_thash.c                       |  2 ++
 lib/ip_frag/rte_ip_frag.h                  |  4 ++--
 lib/mempool/rte_mempool.c                  |  2 +-
 lib/mempool/rte_mempool.h                  |  9 ++++----
 lib/pci/rte_pci.h                          |  1 -
 lib/ring/rte_ring_core.h                   |  1 -
 lib/table/rte_swx_table.h                  |  7 +++---
 lib/table/rte_swx_table_selector.h         |  5 ++--
 lib/vhost/iotlb.c                          | 11 +++++----
 lib/vhost/rte_vdpa_dev.h                   |  2 +-
 lib/vhost/vdpa.c                           |  2 +-
 63 files changed, 175 insertions(+), 124 deletions(-)

diff --git a/drivers/bus/auxiliary/private.h b/drivers/bus/auxiliary/private.h
index 9987e8b501..d22e83cf7a 100644
--- a/drivers/bus/auxiliary/private.h
+++ b/drivers/bus/auxiliary/private.h
@@ -7,6 +7,7 @@
 
 #include <stdbool.h>
 #include <stdio.h>
+#include <sys/queue.h>
 
 #include "rte_bus_auxiliary.h"
 
diff --git a/drivers/bus/auxiliary/rte_bus_auxiliary.h b/drivers/bus/auxiliary/rte_bus_auxiliary.h
index 2462bad2ba..b1f5610404 100644
--- a/drivers/bus/auxiliary/rte_bus_auxiliary.h
+++ b/drivers/bus/auxiliary/rte_bus_auxiliary.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -113,7 +112,7 @@ typedef int (rte_auxiliary_dma_unmap_t)(struct rte_auxiliary_device *dev,
  * A structure describing an auxiliary device.
  */
 struct rte_auxiliary_device {
-	TAILQ_ENTRY(rte_auxiliary_device) next;   /**< Next probed device. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_device) next; /**< Next probed device. */
 	struct rte_device device;                 /**< Inherit core device */
 	char name[RTE_DEV_NAME_MAX_LEN + 1];      /**< ASCII device name */
 	struct rte_intr_handle intr_handle;       /**< Interrupt handle */
@@ -124,7 +123,7 @@ struct rte_auxiliary_device {
  * A structure describing an auxiliary driver.
  */
 struct rte_auxiliary_driver {
-	TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_auxiliary_driver) next; /**< Next in list. */
 	struct rte_driver driver;             /**< Inherit core driver. */
 	struct rte_auxiliary_bus *bus;        /**< Auxiliary bus reference. */
 	rte_auxiliary_match_t *match;         /**< Device match function. */
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index e499305d85..6cab2ae760 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -105,7 +105,7 @@ dpaa_add_to_device_list(struct rte_dpaa_device *newdev)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		comp = compare_dpaa_devices(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
@@ -245,7 +245,7 @@ dpaa_clean_device_list(void)
 	struct rte_dpaa_device *dev = NULL;
 	struct rte_dpaa_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_dpaa_bus.device_list, next, tdev) {
 		TAILQ_REMOVE(&rte_dpaa_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index becc455f6b..8c8f8a298d 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -45,7 +45,7 @@ cleanup_fslmc_device_list(void)
 	struct rte_dpaa2_device *dev;
 	struct rte_dpaa2_device *t_dev;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, t_dev) {
 		TAILQ_REMOVE(&rte_fslmc_bus.device_list, dev, next);
 		free(dev);
 		dev = NULL;
@@ -82,7 +82,7 @@ insert_in_device_list(struct rte_dpaa2_device *newdev)
 	struct rte_dpaa2_device *dev = NULL;
 	struct rte_dpaa2_device *tdev = NULL;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, tdev) {
 		comp = compare_dpaa2_devname(newdev, dev);
 		if (comp < 0) {
 			TAILQ_INSERT_BEFORE(dev, newdev, next);
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index c8373e627a..852fcfc4dd 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -808,7 +808,8 @@ fslmc_vfio_process_group(void)
 	bool is_dpmcp_in_blocklist = false, is_dpio_in_blocklist = false;
 	int dpmcp_count = 0, dpio_count = 0, current_device;
 
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			dpmcp_count++;
 			if (dev->device.devargs &&
@@ -825,7 +826,8 @@ fslmc_vfio_process_group(void)
 
 	/* Search the MCP as that should be initialized first. */
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_MPORTAL) {
 			current_device++;
 			if (dev->device.devargs &&
@@ -872,7 +874,8 @@ fslmc_vfio_process_group(void)
 	}
 
 	current_device = 0;
-	TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next, dev_temp) {
+	RTE_TAILQ_FOREACH_SAFE(dev, &rte_fslmc_bus.device_list, next,
+		dev_temp) {
 		if (dev->dev_type == DPAA2_IO)
 			current_device++;
 		if (dev->device.devargs &&
diff --git a/drivers/bus/ifpga/rte_bus_ifpga.h b/drivers/bus/ifpga/rte_bus_ifpga.h
index b43084155a..a85e90d384 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga.h
+++ b/drivers/bus/ifpga/rte_bus_ifpga.h
@@ -28,9 +28,9 @@ struct rte_afu_device;
 struct rte_afu_driver;
 
 /** Double linked list of Intel FPGA AFU device. */
-TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
+RTE_TAILQ_HEAD(ifpga_afu_dev_list, rte_afu_device);
 /** Double linked list of Intel FPGA AFU device drivers. */
-TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
+RTE_TAILQ_HEAD(ifpga_afu_drv_list, rte_afu_driver);
 
 #define IFPGA_BUS_BITSTREAM_PATH_MAX_LEN 256
 
@@ -71,7 +71,7 @@ struct rte_afu_shared {
  * A structure describing a AFU device.
  */
 struct rte_afu_device {
-	TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
+	RTE_TAILQ_ENTRY(rte_afu_device) next;       /**< Next in device list. */
 	struct rte_device device;               /**< Inherit core device */
 	struct rte_rawdev *rawdev;    /**< Point Rawdev */
 	struct rte_afu_id id;                   /**< AFU id within FPGA. */
@@ -105,7 +105,7 @@ typedef int (afu_remove_t)(struct rte_afu_device *);
  * A structure describing a AFU device.
  */
 struct rte_afu_driver {
-	TAILQ_ENTRY(rte_afu_driver) next;       /**< Next afu driver. */
+	RTE_TAILQ_ENTRY(rte_afu_driver) next;   /**< Next afu driver. */
 	struct rte_driver driver;               /**< Inherit core driver. */
 	afu_probe_t *probe;                     /**< Device Probe function. */
 	afu_remove_t *remove;                   /**< Device Remove function. */
diff --git a/drivers/bus/pci/pci_params.c b/drivers/bus/pci/pci_params.c
index 3192e9c967..717388753d 100644
--- a/drivers/bus/pci/pci_params.c
+++ b/drivers/bus/pci/pci_params.c
@@ -2,6 +2,8 @@
  * Copyright 2018 Gaëtan Rivet
  */
 
+#include <sys/queue.h>
+
 #include <rte_bus.h>
 #include <rte_bus_pci.h>
 #include <rte_dev.h>
diff --git a/drivers/bus/pci/rte_bus_pci.h b/drivers/bus/pci/rte_bus_pci.h
index 583470e831..673a2850c1 100644
--- a/drivers/bus/pci/rte_bus_pci.h
+++ b/drivers/bus/pci/rte_bus_pci.h
@@ -19,7 +19,6 @@ extern "C" {
 #include <stdlib.h>
 #include <limits.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -37,16 +36,16 @@ struct rte_pci_device;
 struct rte_pci_driver;
 
 /** List of PCI devices */
-TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
+RTE_TAILQ_HEAD(rte_pci_device_list, rte_pci_device);
 /** List of PCI drivers */
-TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
+RTE_TAILQ_HEAD(rte_pci_driver_list, rte_pci_driver);
 
 /* PCI Bus iterators */
 #define FOREACH_DEVICE_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_PCIBUS(p)	\
-		TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
+		RTE_TAILQ_FOREACH(p, &(rte_pci_bus.driver_list), next)
 
 struct rte_devargs;
 
@@ -64,7 +63,7 @@ enum rte_pci_kernel_driver {
  * A structure describing a PCI device.
  */
 struct rte_pci_device {
-	TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
+	RTE_TAILQ_ENTRY(rte_pci_device) next;   /**< Next probed PCI device. */
 	struct rte_device device;           /**< Inherit core device */
 	struct rte_pci_addr addr;           /**< PCI location. */
 	struct rte_pci_id id;               /**< PCI ID. */
@@ -160,7 +159,7 @@ typedef int (pci_dma_unmap_t)(struct rte_pci_device *dev, void *addr,
  * A structure describing a PCI driver.
  */
 struct rte_pci_driver {
-	TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_pci_driver) next;  /**< Next in list. */
 	struct rte_driver driver;          /**< Inherit core driver. */
 	struct rte_pci_bus *bus;           /**< PCI bus reference. */
 	rte_pci_probe_t *probe;            /**< Device probe function. */
diff --git a/drivers/bus/pci/windows/pci.c b/drivers/bus/pci/windows/pci.c
index d39a7748b8..d7bd5d6e80 100644
--- a/drivers/bus/pci/windows/pci.c
+++ b/drivers/bus/pci/windows/pci.c
@@ -1,6 +1,9 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright 2020 Mellanox Technologies, Ltd
  */
+
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/pci/windows/pci_netuio.c b/drivers/bus/pci/windows/pci_netuio.c
index 1bf9133f71..a0b175a8fc 100644
--- a/drivers/bus/pci/windows/pci_netuio.c
+++ b/drivers/bus/pci/windows/pci_netuio.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2020 Intel Corporation.
  */
 
+#include <sys/queue.h>
+
 #include <rte_windows.h>
 #include <rte_errno.h>
 #include <rte_log.h>
diff --git a/drivers/bus/vdev/rte_bus_vdev.h b/drivers/bus/vdev/rte_bus_vdev.h
index fc315d10fa..2856799953 100644
--- a/drivers/bus/vdev/rte_bus_vdev.h
+++ b/drivers/bus/vdev/rte_bus_vdev.h
@@ -15,12 +15,11 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <rte_dev.h>
 #include <rte_devargs.h>
 
 struct rte_vdev_device {
-	TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
+	RTE_TAILQ_ENTRY(rte_vdev_device) next;      /**< Next attached vdev */
 	struct rte_device device;               /**< Inherit core device */
 };
 
@@ -53,7 +52,7 @@ rte_vdev_device_args(const struct rte_vdev_device *dev)
 }
 
 /** Double linked list of virtual device drivers. */
-TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
+RTE_TAILQ_HEAD(vdev_driver_list, rte_vdev_driver);
 
 /**
  * Probe function called for each virtual device driver once.
@@ -107,7 +106,7 @@ typedef int (rte_vdev_dma_unmap_t)(struct rte_vdev_device *dev, void *addr,
  * A virtual device driver abstraction.
  */
 struct rte_vdev_driver {
-	TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vdev_driver) next; /**< Next in list. */
 	struct rte_driver driver;        /**< Inherited general driver. */
 	rte_vdev_probe_t *probe;         /**< Virtual device probe function. */
 	rte_vdev_remove_t *remove;       /**< Virtual device remove function. */
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index 281a2c34e8..a8d8b2327e 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -100,7 +100,8 @@ rte_vdev_remove_custom_scan(rte_vdev_scan_callback callback, void *user_arg)
 	struct vdev_custom_scan *custom_scan, *tmp_scan;
 
 	rte_spinlock_lock(&vdev_custom_scan_lock);
-	TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next, tmp_scan) {
+	RTE_TAILQ_FOREACH_SAFE(custom_scan, &vdev_custom_scans, next,
+				tmp_scan) {
 		if (custom_scan->callback != callback ||
 				(custom_scan->user_arg != (void *)-1 &&
 				custom_scan->user_arg != user_arg))
diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 4cf73ce815..6bcff66468 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -20,7 +20,6 @@ extern "C" {
 #include <limits.h>
 #include <stdbool.h>
 #include <errno.h>
-#include <sys/queue.h>
 #include <stdint.h>
 #include <inttypes.h>
 
@@ -38,15 +37,15 @@ struct rte_vmbus_bus;
 struct vmbus_channel;
 struct vmbus_mon_page;
 
-TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
-TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
+RTE_TAILQ_HEAD(rte_vmbus_device_list, rte_vmbus_device);
+RTE_TAILQ_HEAD(rte_vmbus_driver_list, rte_vmbus_driver);
 
 /* VMBus iterators */
 #define FOREACH_DEVICE_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.device_list), next)
 
 #define FOREACH_DRIVER_ON_VMBUS(p)	\
-	TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
+	RTE_TAILQ_FOREACH(p, &(rte_vmbus_bus.driver_list), next)
 
 /** Maximum number of VMBUS resources. */
 enum hv_uio_map {
@@ -62,7 +61,7 @@ enum hv_uio_map {
  * A structure describing a VMBUS device.
  */
 struct rte_vmbus_device {
-	TAILQ_ENTRY(rte_vmbus_device) next;    /**< Next probed VMBUS device */
+	RTE_TAILQ_ENTRY(rte_vmbus_device) next; /**< Next probed VMBUS device */
 	const struct rte_vmbus_driver *driver; /**< Associated driver */
 	struct rte_device device;              /**< Inherit core device */
 	rte_uuid_t device_id;		       /**< VMBUS device id */
@@ -93,7 +92,7 @@ typedef int (vmbus_remove_t)(struct rte_vmbus_device *);
  * A structure describing a VMBUS driver.
  */
 struct rte_vmbus_driver {
-	TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_vmbus_driver) next; /**< Next in list. */
 	struct rte_driver driver;
 	struct rte_vmbus_bus *bus;          /**< VM bus reference. */
 	vmbus_probe_t *probe;               /**< Device Probe function. */
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
index dbf85e4eda..ac86b70caf 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_ulp.c
@@ -2018,7 +2018,7 @@ bnxt_ulp_cntxt_list_del(struct bnxt_ulp_context *ulp_ctx)
 	struct ulp_context_list_entry	*entry, *temp;
 
 	rte_spinlock_lock(&bnxt_ulp_ctxt_lock);
-	TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(entry, &ulp_cntx_list, next, temp) {
 		if (entry->ulp_ctx == ulp_ctx) {
 			TAILQ_REMOVE(&ulp_cntx_list, entry, next);
 			rte_free(entry);
diff --git a/drivers/net/bonding/rte_eth_bond_flow.c b/drivers/net/bonding/rte_eth_bond_flow.c
index 417f76bf60..65b77faae7 100644
--- a/drivers/net/bonding/rte_eth_bond_flow.c
+++ b/drivers/net/bonding/rte_eth_bond_flow.c
@@ -157,7 +157,7 @@ bond_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *err)
 	/* Destroy all bond flows from its slaves instead of flushing them to
 	 * keep the LACP flow or any other external flows.
 	 */
-	TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &internals->flow_list, next, tmp) {
 		lret = bond_flow_destroy(dev, flow, err);
 		if (unlikely(lret != 0))
 			ret = lret;
diff --git a/drivers/net/failsafe/failsafe_flow.c b/drivers/net/failsafe/failsafe_flow.c
index 5e2b5f7c67..354f9fec20 100644
--- a/drivers/net/failsafe/failsafe_flow.c
+++ b/drivers/net/failsafe/failsafe_flow.c
@@ -180,7 +180,7 @@ fs_flow_flush(struct rte_eth_dev *dev,
 			return ret;
 		}
 	}
-	TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &PRIV(dev)->flow_list, next, tmp) {
 		TAILQ_REMOVE(&PRIV(dev)->flow_list, flow, next);
 		fs_flow_release(&flow);
 	}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed1..6590363556 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -5436,7 +5436,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* VSI has child to attach, release child first */
 	if (vsi->veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->veb->head, list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5444,7 +5444,8 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 	}
 
 	if (vsi->floating_veb) {
-		TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head, list, temp) {
+		RTE_TAILQ_FOREACH_SAFE(vsi_list, &vsi->floating_veb->head,
+			list, temp) {
 			if (i40e_vsi_release(vsi_list->vsi) != I40E_SUCCESS)
 				return -1;
 		}
@@ -5452,7 +5453,7 @@ i40e_vsi_release(struct i40e_vsi *vsi)
 
 	/* Remove all macvlan filters of the VSI */
 	i40e_vsi_remove_all_macvlan_filter(vsi);
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		rte_free(f);
 
 	if (vsi->type != I40E_VSI_MAIN &&
@@ -6055,7 +6056,7 @@ i40e_vsi_config_vlan_filter(struct i40e_vsi *vsi, bool on)
 	i = 0;
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		mac_filter[i] = f->mac_info;
 		ret = i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr);
 		if (ret) {
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60..374b73e4a7 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -6,6 +6,7 @@
 #define _I40E_ETHDEV_H_
 
 #include <stdint.h>
+#include <sys/queue.h>
 
 #include <rte_time.h>
 #include <rte_kvargs.h>
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c..e41a84f1d7 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -4917,7 +4917,7 @@ i40e_flow_flush_fdir_filter(struct i40e_pf *pf)
 		}
 
 		/* Delete FDIR flows in flow list. */
-		TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 			if (flow->filter_type == RTE_ETH_FILTER_FDIR) {
 				TAILQ_REMOVE(&pf->flow_list, flow, node);
 			}
@@ -4972,7 +4972,7 @@ i40e_flow_flush_ethertype_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete ethertype flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_ETHERTYPE) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
@@ -5000,7 +5000,7 @@ i40e_flow_flush_tunnel_filter(struct i40e_pf *pf)
 	}
 
 	/* Delete tunnel flows in flow list. */
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, temp) {
 		if (flow->filter_type == RTE_ETH_FILTER_TUNNEL) {
 			TAILQ_REMOVE(&pf->flow_list, flow, node);
 			rte_free(flow);
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfc..6579b1a00b 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -1366,7 +1366,7 @@ i40e_hash_filter_flush(struct i40e_pf *pf)
 {
 	struct rte_flow *flow, *next;
 
-	TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &pf->flow_list, node, next) {
 		if (flow->filter_type != RTE_ETH_FILTER_HASH)
 			continue;
 
diff --git a/drivers/net/i40e/rte_pmd_i40e.c b/drivers/net/i40e/rte_pmd_i40e.c
index 2e34140c5b..ec24046440 100644
--- a/drivers/net/i40e/rte_pmd_i40e.c
+++ b/drivers/net/i40e/rte_pmd_i40e.c
@@ -216,7 +216,7 @@ i40e_vsi_rm_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* remove all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		vlan_num = vsi->vlan_num;
 		filter_type = f->mac_info.filter_type;
 		if (filter_type == I40E_MACVLAN_PERFECT_MATCH ||
@@ -274,7 +274,7 @@ i40e_vsi_restore_mac_filter(struct i40e_vsi *vsi)
 	void *temp;
 
 	/* restore all the MACs */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp) {
 		if (f->mac_info.filter_type == I40E_MACVLAN_PERFECT_MATCH ||
 		    f->mac_info.filter_type == I40E_MACVLAN_HASH_MATCH) {
 			/**
@@ -563,7 +563,7 @@ rte_pmd_i40e_set_vf_mac_addr(uint16_t port, uint16_t vf_id,
 	rte_ether_addr_copy(mac_addr, &vf->mac_addr);
 
 	/* Remove all existing mac */
-	TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
+	RTE_TAILQ_FOREACH_SAFE(f, &vsi->mac_list, next, temp)
 		if (i40e_vsi_delete_mac(vsi, &f->mac_info.mac_addr)
 				!= I40E_SUCCESS)
 			PMD_DRV_LOG(WARNING, "Delete MAC failed");
diff --git a/drivers/net/iavf/iavf_generic_flow.c b/drivers/net/iavf/iavf_generic_flow.c
index 1fe270fb22..b86d99e57d 100644
--- a/drivers/net/iavf/iavf_generic_flow.c
+++ b/drivers/net/iavf/iavf_generic_flow.c
@@ -1637,7 +1637,7 @@ iavf_flow_init(struct iavf_adapter *ad)
 	TAILQ_INIT(&vf->dist_parser_list);
 	rte_spinlock_init(&vf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 				     engine->type);
@@ -1663,7 +1663,7 @@ iavf_flow_uninit(struct iavf_adapter *ad)
 	struct iavf_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1733,7 +1733,7 @@ iavf_unregister_parser(struct iavf_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -1917,7 +1917,7 @@ iavf_parse_engine_create(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -1946,7 +1946,7 @@ iavf_parse_engine_validate(struct iavf_adapter *ad,
 	void *temp;
 	void *meta = NULL;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2089,7 +2089,7 @@ iavf_flow_is_valid(struct rte_flow *flow)
 	void *temp;
 
 	if (flow && flow->engine) {
-		TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+		RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 			if (engine == flow->engine)
 				return true;
 		}
@@ -2142,7 +2142,7 @@ iavf_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &vf->flow_list, node, temp) {
 		ret = iavf_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da87..629e88980d 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -4,6 +4,7 @@
 
 #include <errno.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 #include <sys/types.h>
 #include <unistd.h>
 
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954..fadd5f2e5a 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1104,7 +1104,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (!vsi || !vsi->mac_num)
 		return -EINVAL;
 
-	TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(m_f, &vsi->mac_list, next, temp) {
 		ret = ice_remove_mac_filter(vsi, &m_f->mac_info.mac_addr);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
@@ -1115,7 +1115,7 @@ ice_remove_all_mac_vlan_filters(struct ice_vsi *vsi)
 	if (vsi->vlan_num == 0)
 		return 0;
 
-	TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(v_f, &vsi->vlan_list, next, temp) {
 		ret = ice_remove_vlan_filter(vsi, &v_f->vlan_info.vlan);
 		if (ret != ICE_SUCCESS) {
 			ret = -EINVAL;
diff --git a/drivers/net/ice/ice_generic_flow.c b/drivers/net/ice/ice_generic_flow.c
index 66b5743abf..3e557efe0c 100644
--- a/drivers/net/ice/ice_generic_flow.c
+++ b/drivers/net/ice/ice_generic_flow.c
@@ -1820,7 +1820,7 @@ ice_flow_init(struct ice_adapter *ad)
 	TAILQ_INIT(&pf->dist_parser_list);
 	rte_spinlock_init(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->init == NULL) {
 			PMD_INIT_LOG(ERR, "Invalid engine type (%d)",
 					engine->type);
@@ -1846,7 +1846,7 @@ ice_flow_uninit(struct ice_adapter *ad)
 	struct ice_flow_parser_node *p_parser;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(engine, &engine_list, node, temp) {
 		if (engine->uninit)
 			engine->uninit(ad);
 	}
@@ -1946,7 +1946,7 @@ ice_unregister_parser(struct ice_flow_parser *parser,
 	if (list == NULL)
 		return;
 
-	TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_parser, list, node, temp) {
 		if (p_parser->parser->engine->type == parser->engine->type) {
 			TAILQ_REMOVE(list, p_parser, node);
 			rte_free(p_parser);
@@ -2272,7 +2272,7 @@ ice_parse_engine_create(struct ice_adapter *ad,
 	void *meta = NULL;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		int ret;
 
 		if (parser_node->parser->parse_pattern_action(ad,
@@ -2305,7 +2305,7 @@ ice_parse_engine_validate(struct ice_adapter *ad,
 	struct ice_flow_parser_node *parser_node;
 	void *temp;
 
-	TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(parser_node, parser_list, node, temp) {
 		if (parser_node->parser->parse_pattern_action(ad,
 				parser_node->parser->array,
 				parser_node->parser->array_len,
@@ -2477,7 +2477,7 @@ ice_flow_flush(struct rte_eth_dev *dev,
 	void *temp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		ret = ice_flow_destroy(dev, p_flow, error);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "Failed to flush flows");
@@ -2541,7 +2541,7 @@ ice_flow_redirect(struct ice_adapter *ad,
 
 	rte_spinlock_lock(&pf->flow_ops_lock);
 
-	TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
+	RTE_TAILQ_FOREACH_SAFE(p_flow, &pf->flow_list, node, temp) {
 		if (!p_flow->engine->redirect)
 			continue;
 		ret = p_flow->engine->redirect(ad, p_flow, rd);
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index c702e19ea5..f5867ca055 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -1231,7 +1231,7 @@ ipn3ke_flow_flush(struct rte_eth_dev *dev,
 	struct ipn3ke_hw *hw = IPN3KE_DEV_PRIVATE_TO_HW(dev);
 	struct rte_flow *flow, *temp;
 
-	TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(flow, &hw->flow_list, next, temp) {
 		TAILQ_REMOVE(&hw->flow_list, flow, next);
 		rte_free(flow);
 	}
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 31d857030f..ba2bf4de37 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -15099,7 +15099,7 @@ __flow_dv_destroy_sub_policy_rules(struct rte_eth_dev *dev,
 		    policy->act_cnt[i].fate_action == MLX5_FLOW_FATE_MTR)
 			next_fm = mlx5_flow_meter_find(priv,
 					policy->act_cnt[i].next_mtr_id, NULL);
-		TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
+		RTE_TAILQ_FOREACH_SAFE(color_rule, &sub_policy->color_rules[i],
 				   next_port, tmp) {
 			claim_zero(mlx5_flow_os_destroy_flow(color_rule->rule));
 			tbl = container_of(color_rule->matcher->tbl,
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index a24bd9c7ae..ba4e9fca17 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -2168,7 +2168,7 @@ mlx5_flow_meter_flush(struct rte_eth_dev *dev, struct rte_mtr_error *error)
 			priv->mtr_idx_tbl = NULL;
 		}
 	} else {
-		TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
+		RTE_TAILQ_FOREACH_SAFE(legacy_fm, fms, next, tmp) {
 			fm = &legacy_fm->fm;
 			if (mlx5_flow_meter_params_flush(dev, fm, 0))
 				return -rte_mtr_error_set(error, EINVAL,
diff --git a/drivers/net/softnic/rte_eth_softnic_flow.c b/drivers/net/softnic/rte_eth_softnic_flow.c
index 27eaf380cd..7d054c38d2 100644
--- a/drivers/net/softnic/rte_eth_softnic_flow.c
+++ b/drivers/net/softnic/rte_eth_softnic_flow.c
@@ -2207,7 +2207,8 @@ pmd_flow_flush(struct rte_eth_dev *dev,
 			void *temp;
 			int status;
 
-			TAILQ_FOREACH_SAFE(flow, &table->flows, node, temp) {
+			RTE_TAILQ_FOREACH_SAFE(flow, &table->flows, node,
+				temp) {
 				/* Rule delete. */
 				status = softnic_pipeline_table_rule_delete
 						(softnic,
diff --git a/drivers/net/softnic/rte_eth_softnic_swq.c b/drivers/net/softnic/rte_eth_softnic_swq.c
index 2083d0a976..afe6f05e29 100644
--- a/drivers/net/softnic/rte_eth_softnic_swq.c
+++ b/drivers/net/softnic/rte_eth_softnic_swq.c
@@ -39,7 +39,7 @@ softnic_softnic_swq_free_keep_rxq_txq(struct pmd_internals *p)
 {
 	struct softnic_swq *swq, *tswq;
 
-	TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
+	RTE_TAILQ_FOREACH_SAFE(swq, &p->swq_list, node, tswq) {
 		if ((strncmp(swq->name, "RXQ", strlen("RXQ")) == 0) ||
 			(strncmp(swq->name, "TXQ", strlen("TXQ")) == 0))
 			continue;
diff --git a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
index c961e18d67..7b80370b36 100644
--- a/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
+++ b/drivers/raw/dpaa2_qdma/dpaa2_qdma.c
@@ -1606,7 +1606,7 @@ remove_hw_queues_from_list(struct dpaa2_dpdmai_dev *dpdmai_dev)
 
 	DPAA2_QDMA_FUNC_TRACE();
 
-	TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
+	RTE_TAILQ_FOREACH_SAFE(queue, &qdma_queue_list, next, tqueue) {
 		if (queue->dpdmai_dev == dpdmai_dev) {
 			TAILQ_REMOVE(&qdma_queue_list, queue, next);
 			rte_free(queue);
diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
index 7017124414..3ebf62e697 100644
--- a/lib/bbdev/rte_bbdev.h
+++ b/lib/bbdev/rte_bbdev.h
@@ -434,7 +434,7 @@ struct rte_bbdev_callback;
 struct rte_intr_handle;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
+RTE_TAILQ_HEAD(rte_bbdev_cb_list, rte_bbdev_callback);
 
 /**
  * @internal The data structure associated with a device. Drivers can access
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 11f4e6fdbf..f86bf2260b 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -879,7 +879,7 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 struct rte_cryptodev_callback;
 
 /** Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
+RTE_TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);
 
 /**
  * Structure used to hold information about the callbacks to be called for a
diff --git a/lib/cryptodev/rte_cryptodev_pmd.h b/lib/cryptodev/rte_cryptodev_pmd.h
index 1274436870..9542cbf263 100644
--- a/lib/cryptodev/rte_cryptodev_pmd.h
+++ b/lib/cryptodev/rte_cryptodev_pmd.h
@@ -66,7 +66,7 @@ struct rte_cryptodev_global {
 
 /* Cryptodev driver, containing the driver ID */
 struct cryptodev_driver {
-	TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
+	RTE_TAILQ_ENTRY(cryptodev_driver) next; /**< Next in list. */
 	const struct rte_driver *driver;
 	uint8_t id;
 };
diff --git a/lib/eal/common/eal_common_devargs.c b/lib/eal/common/eal_common_devargs.c
index 23aaf8b7e4..2e2f35c47e 100644
--- a/lib/eal/common/eal_common_devargs.c
+++ b/lib/eal/common/eal_common_devargs.c
@@ -291,7 +291,7 @@ rte_devargs_insert(struct rte_devargs **da)
 	if (*da == NULL || (*da)->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(listed_da, &devargs_list, next, tmp) {
 		if (listed_da == *da)
 			/* devargs already in the list */
 			return 0;
@@ -358,7 +358,7 @@ rte_devargs_remove(struct rte_devargs *devargs)
 	if (devargs == NULL || devargs->bus == NULL)
 		return -1;
 
-	TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(d, &devargs_list, next, tmp) {
 		if (strcmp(d->bus->name, devargs->bus->name) == 0 &&
 		    strcmp(d->name, devargs->name) == 0) {
 			TAILQ_REMOVE(&devargs_list, d, next);
diff --git a/lib/eal/common/eal_common_log.c b/lib/eal/common/eal_common_log.c
index ec8fe23a7f..1be35f5397 100644
--- a/lib/eal/common/eal_common_log.c
+++ b/lib/eal/common/eal_common_log.c
@@ -10,6 +10,7 @@
 #include <errno.h>
 #include <regex.h>
 #include <fnmatch.h>
+#include <sys/queue.h>
 
 #include <rte_eal.h>
 #include <rte_log.h>
diff --git a/lib/eal/common/eal_common_options.c b/lib/eal/common/eal_common_options.c
index ff5861b5f3..24f5ceaab0 100644
--- a/lib/eal/common/eal_common_options.c
+++ b/lib/eal/common/eal_common_options.c
@@ -283,7 +283,7 @@ eal_option_device_parse(void)
 	void *tmp;
 	int ret = 0;
 
-	TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
+	RTE_TAILQ_FOREACH_SAFE(devopt, &devopt_list, next, tmp) {
 		if (ret == 0) {
 			ret = rte_devargs_add(devopt->type, devopt->arg);
 			if (ret)
diff --git a/lib/eal/common/eal_private.h b/lib/eal/common/eal_private.h
index 64cf4e81c8..86dab1f057 100644
--- a/lib/eal/common/eal_private.h
+++ b/lib/eal/common/eal_private.h
@@ -8,6 +8,7 @@
 #include <stdbool.h>
 #include <stdint.h>
 #include <stdio.h>
+#include <sys/queue.h>
 
 #include <rte_dev.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/freebsd/include/rte_os.h b/lib/eal/freebsd/include/rte_os.h
index 627f0483ab..9d8a69008c 100644
--- a/lib/eal/freebsd/include/rte_os.h
+++ b/lib/eal/freebsd/include/rte_os.h
@@ -11,6 +11,16 @@
  */
 
 #include <pthread_np.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
 
 typedef cpuset_t rte_cpuset_t;
 #define RTE_HAS_CPUSET
diff --git a/lib/eal/include/rte_bus.h b/lib/eal/include/rte_bus.h
index 80b154fb98..84d364df3f 100644
--- a/lib/eal/include/rte_bus.h
+++ b/lib/eal/include/rte_bus.h
@@ -19,13 +19,12 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_log.h>
 #include <rte_dev.h>
 
 /** Double linked list of buses */
-TAILQ_HEAD(rte_bus_list, rte_bus);
+RTE_TAILQ_HEAD(rte_bus_list, rte_bus);
 
 
 /**
@@ -250,7 +249,7 @@ typedef enum rte_iova_mode (*rte_bus_get_iommu_class_t)(void);
  * A structure describing a generic bus.
  */
 struct rte_bus {
-	TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
+	RTE_TAILQ_ENTRY(rte_bus) next;   /**< Next bus object in linked list */
 	const char *name;            /**< Name of the bus */
 	rte_bus_scan_t scan;         /**< Scan for devices attached to bus */
 	rte_bus_probe_t probe;       /**< Probe devices on bus */
diff --git a/lib/eal/include/rte_class.h b/lib/eal/include/rte_class.h
index 856d09b22d..d560339652 100644
--- a/lib/eal/include/rte_class.h
+++ b/lib/eal/include/rte_class.h
@@ -22,18 +22,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
-
 #include <rte_dev.h>
 
 /** Double linked list of classes */
-TAILQ_HEAD(rte_class_list, rte_class);
+RTE_TAILQ_HEAD(rte_class_list, rte_class);
 
 /**
  * A structure describing a generic device class.
  */
 struct rte_class {
-	TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
+	RTE_TAILQ_ENTRY(rte_class) next; /**< Next device class in linked list */
 	const char *name; /**< Name of the class */
 	rte_dev_iterate_t dev_iterate; /**< Device iterator. */
 };
diff --git a/lib/eal/include/rte_dev.h b/lib/eal/include/rte_dev.h
index 6dd72c11a1..f6efe0c94e 100644
--- a/lib/eal/include/rte_dev.h
+++ b/lib/eal/include/rte_dev.h
@@ -18,7 +18,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_compat.h>
@@ -75,7 +74,7 @@ struct rte_mem_resource {
  * A structure describing a device driver.
  */
 struct rte_driver {
-	TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
+	RTE_TAILQ_ENTRY(rte_driver) next;  /**< Next in list. */
 	const char *name;                   /**< Driver name. */
 	const char *alias;              /**< Driver alias. */
 };
@@ -90,7 +89,7 @@ struct rte_driver {
  * A structure describing a generic device.
  */
 struct rte_device {
-	TAILQ_ENTRY(rte_device) next; /**< Next device */
+	RTE_TAILQ_ENTRY(rte_device) next; /**< Next device */
 	const char *name;             /**< Device name */
 	const struct rte_driver *driver; /**< Driver assigned after probing */
 	const struct rte_bus *bus;    /**< Bus handle assigned on scan */
diff --git a/lib/eal/include/rte_devargs.h b/lib/eal/include/rte_devargs.h
index cd90944fe8..957477b398 100644
--- a/lib/eal/include/rte_devargs.h
+++ b/lib/eal/include/rte_devargs.h
@@ -21,7 +21,6 @@ extern "C" {
 #endif
 
 #include <stdio.h>
-#include <sys/queue.h>
 #include <rte_compat.h>
 #include <rte_bus.h>
 
@@ -76,7 +75,7 @@ enum rte_devtype {
  */
 struct rte_devargs {
 	/** Next in list. */
-	TAILQ_ENTRY(rte_devargs) next;
+	RTE_TAILQ_ENTRY(rte_devargs) next;
 	/** Type of device. */
 	enum rte_devtype type;
 	/** Device policy. */
diff --git a/lib/eal/include/rte_log.h b/lib/eal/include/rte_log.h
index b706bb8710..bb3523467b 100644
--- a/lib/eal/include/rte_log.h
+++ b/lib/eal/include/rte_log.h
@@ -21,7 +21,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdarg.h>
 #include <stdbool.h>
-#include <sys/queue.h>
 
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/eal/include/rte_service.h b/lib/eal/include/rte_service.h
index c7d037d862..1c9275c32a 100644
--- a/lib/eal/include/rte_service.h
+++ b/lib/eal/include/rte_service.h
@@ -29,7 +29,6 @@ extern "C" {
 
 #include<stdio.h>
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_lcore.h>
diff --git a/lib/eal/include/rte_tailq.h b/lib/eal/include/rte_tailq.h
index b6fe4e5f78..0f67f9e4db 100644
--- a/lib/eal/include/rte_tailq.h
+++ b/lib/eal/include/rte_tailq.h
@@ -15,17 +15,16 @@
 extern "C" {
 #endif
 
-#include <sys/queue.h>
 #include <stdio.h>
 #include <rte_debug.h>
 
 /** dummy structure type used by the rte_tailq APIs */
 struct rte_tailq_entry {
-	TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
+	RTE_TAILQ_ENTRY(rte_tailq_entry) next; /**< Pointer entries for a tailq list */
 	void *data; /**< Pointer to the data referenced by this tailq entry */
 };
 /** dummy */
-TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
+RTE_TAILQ_HEAD(rte_tailq_entry_head, rte_tailq_entry);
 
 #define RTE_TAILQ_NAMESIZE 32
 
@@ -48,7 +47,7 @@ struct rte_tailq_elem {
 	 * rte_eal_tailqs_init()
 	 */
 	struct rte_tailq_head *head;
-	TAILQ_ENTRY(rte_tailq_elem) next;
+	RTE_TAILQ_ENTRY(rte_tailq_elem) next;
 	const char name[RTE_TAILQ_NAMESIZE];
 };
 
@@ -126,12 +125,10 @@ RTE_INIT(tailqinitfn_ ##t) \
 }
 
 /* This macro permits both remove and free var within the loop safely.*/
-#ifndef TAILQ_FOREACH_SAFE
-#define TAILQ_FOREACH_SAFE(var, head, field, tvar)		\
-	for ((var) = TAILQ_FIRST((head));			\
-	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1);	\
+#define RTE_TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+	for ((var) = RTE_TAILQ_FIRST((head)); \
+	    (var) && ((tvar) = RTE_TAILQ_NEXT((var), field), 1); \
 	    (var) = (tvar))
-#endif
 
 #ifdef __cplusplus
 }
diff --git a/lib/eal/linux/include/rte_os.h b/lib/eal/linux/include/rte_os.h
index 1618b4df22..35c07c70cb 100644
--- a/lib/eal/linux/include/rte_os.h
+++ b/lib/eal/linux/include/rte_os.h
@@ -11,6 +11,16 @@
  */
 
 #include <sched.h>
+#include <sys/queue.h>
+
+/* These macros are compatible with system's sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) TAILQ_HEAD(name, type)
+#define RTE_TAILQ_ENTRY(type) TAILQ_ENTRY(type)
+#define RTE_TAILQ_FOREACH(var, head, field) TAILQ_FOREACH(var, head, field)
+#define RTE_TAILQ_FIRST(head) TAILQ_FIRST(head)
+#define RTE_TAILQ_NEXT(elem, field) TAILQ_NEXT(elem, field)
+#define RTE_STAILQ_HEAD(name, type) STAILQ_HEAD(name, type)
+#define RTE_STAILQ_ENTRY(type) STAILQ_ENTRY(type)
 
 #ifdef CPU_SETSIZE /* may require _GNU_SOURCE */
 typedef cpu_set_t rte_cpuset_t;
diff --git a/lib/eal/windows/eal_alarm.c b/lib/eal/windows/eal_alarm.c
index e5dc54efb8..103c1f909d 100644
--- a/lib/eal/windows/eal_alarm.c
+++ b/lib/eal/windows/eal_alarm.c
@@ -4,6 +4,7 @@
 
 #include <stdatomic.h>
 #include <stdbool.h>
+#include <sys/queue.h>
 
 #include <rte_alarm.h>
 #include <rte_spinlock.h>
diff --git a/lib/eal/windows/include/rte_os.h b/lib/eal/windows/include/rte_os.h
index 66c711d458..a0a311495e 100644
--- a/lib/eal/windows/include/rte_os.h
+++ b/lib/eal/windows/include/rte_os.h
@@ -18,6 +18,33 @@
 extern "C" {
 #endif
 
+/* These macros are compatible with bundled sys/queue.h. */
+#define RTE_TAILQ_HEAD(name, type) \
+struct name { \
+	struct type *tqh_first; \
+	struct type **tqh_last; \
+}
+#define RTE_TAILQ_ENTRY(type) \
+struct { \
+	struct type *tqe_next; \
+	struct type **tqe_prev; \
+}
+#define RTE_TAILQ_FOREACH(var, head, field) \
+	for ((var) = RTE_TAILQ_FIRST((head)); \
+	    (var); \
+	    (var) = RTE_TAILQ_NEXT((var), field))
+#define RTE_TAILQ_FIRST(head) ((head)->tqh_first)
+#define RTE_TAILQ_NEXT(elm, field) ((elm)->field.tqe_next)
+#define RTE_STAILQ_HEAD(name, type) \
+struct name { \
+	struct type *stqh_first; \
+	struct type **stqh_last; \
+}
+#define RTE_STAILQ_ENTRY(type) \
+struct { \
+	struct type *stqe_next; \
+}
+
 /* cpu_set macros implementation */
 #define RTE_CPU_AND(dst, src1, src2) CPU_AND(dst, src1, src2)
 #define RTE_CPU_OR(dst, src1, src2) CPU_OR(dst, src1, src2)
diff --git a/lib/efd/rte_efd.c b/lib/efd/rte_efd.c
index 77f46809f8..5bf517fee9 100644
--- a/lib/efd/rte_efd.c
+++ b/lib/efd/rte_efd.c
@@ -759,7 +759,7 @@ rte_efd_free(struct rte_efd_table *table)
 	efd_list = RTE_TAILQ_CAST(rte_efd_tailq.head, rte_efd_list);
 	rte_mcfg_tailq_write_lock();
 
-	TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
+	RTE_TAILQ_FOREACH_SAFE(te, efd_list, next, temp) {
 		if (te->data == (void *) table) {
 			TAILQ_REMOVE(efd_list, te, next);
 			rte_free(te);
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc..d2c9ec42c7 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -21,7 +21,7 @@
 
 struct rte_eth_dev_callback;
 /** @internal Structure to keep track of registered callbacks */
-TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
+RTE_TAILQ_HEAD(rte_eth_dev_cb_list, rte_eth_dev_callback);
 
 struct rte_eth_dev;
 
diff --git a/lib/hash/rte_fbk_hash.h b/lib/hash/rte_fbk_hash.h
index c4d6976d2b..9c3a61c1d6 100644
--- a/lib/hash/rte_fbk_hash.h
+++ b/lib/hash/rte_fbk_hash.h
@@ -17,7 +17,6 @@
 
 #include <stdint.h>
 #include <errno.h>
-#include <sys/queue.h>
 
 #ifdef __cplusplus
 extern "C" {
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index d5a95a6e00..696a1121e2 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -2,6 +2,8 @@
  * Copyright(c) 2021 Intel Corporation
  */
 
+#include <sys/queue.h>
+
 #include <rte_thash.h>
 #include <rte_tailq.h>
 #include <rte_random.h>
diff --git a/lib/ip_frag/rte_ip_frag.h b/lib/ip_frag/rte_ip_frag.h
index 0bfe64b14e..80f931c32a 100644
--- a/lib/ip_frag/rte_ip_frag.h
+++ b/lib/ip_frag/rte_ip_frag.h
@@ -62,7 +62,7 @@ struct ip_frag_key {
  * First two entries in the frags[] array are for the last and first fragments.
  */
 struct ip_frag_pkt {
-	TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
+	RTE_TAILQ_ENTRY(ip_frag_pkt) lru;   /**< LRU list */
 	struct ip_frag_key key;           /**< fragmentation key */
 	uint64_t             start;       /**< creation timestamp */
 	uint32_t             total_size;  /**< expected reassembled size */
@@ -83,7 +83,7 @@ struct rte_ip_frag_death_row {
 	/**< mbufs to be freed */
 };
 
-TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
+RTE_TAILQ_HEAD(ip_pkt_list, ip_frag_pkt); /**< @internal fragments tailq */
 
 /** fragmentation table statistics */
 struct ip_frag_tbl_stat {
diff --git a/lib/mempool/rte_mempool.c b/lib/mempool/rte_mempool.c
index 59a588425b..c5f859ae71 100644
--- a/lib/mempool/rte_mempool.c
+++ b/lib/mempool/rte_mempool.c
@@ -1337,7 +1337,7 @@ void rte_mempool_walk(void (*func)(struct rte_mempool *, void *),
 
 	rte_mcfg_mempool_read_lock();
 
-	TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
+	RTE_TAILQ_FOREACH_SAFE(te, mempool_list, next, tmp_te) {
 		(*func)((struct rte_mempool *) te->data, arg);
 	}
 
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 4235d6f0bf..f57ecbd6fc 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -38,7 +38,6 @@
 #include <stdint.h>
 #include <errno.h>
 #include <inttypes.h>
-#include <sys/queue.h>
 
 #include <rte_config.h>
 #include <rte_spinlock.h>
@@ -141,7 +140,7 @@ struct rte_mempool_objsz {
  * double-frees.
  */
 struct rte_mempool_objhdr {
-	STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_objhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;          /**< The mempool owning the object. */
 	rte_iova_t iova;                 /**< IO address of the object. */
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
@@ -152,7 +151,7 @@ struct rte_mempool_objhdr {
 /**
  * A list of object headers type
  */
-STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
+RTE_STAILQ_HEAD(rte_mempool_objhdr_list, rte_mempool_objhdr);
 
 #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
 
@@ -171,7 +170,7 @@ struct rte_mempool_objtlr {
 /**
  * A list of memory where objects are stored
  */
-STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
+RTE_STAILQ_HEAD(rte_mempool_memhdr_list, rte_mempool_memhdr);
 
 /**
  * Callback used to free a memory chunk
@@ -186,7 +185,7 @@ typedef void (rte_mempool_memchunk_free_cb_t)(struct rte_mempool_memhdr *memhdr,
  * and physically contiguous.
  */
 struct rte_mempool_memhdr {
-	STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
+	RTE_STAILQ_ENTRY(rte_mempool_memhdr) next; /**< Next in list. */
 	struct rte_mempool *mp;  /**< The mempool owning the chunk */
 	void *addr;              /**< Virtual address of the chunk */
 	rte_iova_t iova;         /**< IO address of the chunk */
diff --git a/lib/pci/rte_pci.h b/lib/pci/rte_pci.h
index 1f33d687f4..71cbd441c7 100644
--- a/lib/pci/rte_pci.h
+++ b/lib/pci/rte_pci.h
@@ -18,7 +18,6 @@ extern "C" {
 
 #include <stdio.h>
 #include <limits.h>
-#include <sys/queue.h>
 #include <inttypes.h>
 #include <sys/types.h>
 
diff --git a/lib/ring/rte_ring_core.h b/lib/ring/rte_ring_core.h
index 16718ca7f1..43ce1a29d4 100644
--- a/lib/ring/rte_ring_core.h
+++ b/lib/ring/rte_ring_core.h
@@ -26,7 +26,6 @@ extern "C" {
 #include <stdio.h>
 #include <stdint.h>
 #include <string.h>
-#include <sys/queue.h>
 #include <errno.h>
 #include <rte_common.h>
 #include <rte_config.h>
diff --git a/lib/table/rte_swx_table.h b/lib/table/rte_swx_table.h
index e23f2304c6..f93e5f3f95 100644
--- a/lib/table/rte_swx_table.h
+++ b/lib/table/rte_swx_table.h
@@ -16,7 +16,8 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
+
+#include <rte_os.h>
 
 /** Match type. */
 enum rte_swx_table_match_type {
@@ -68,7 +69,7 @@ struct rte_swx_table_entry {
 	/** Used to facilitate the membership of this table entry to a
 	 * linked list.
 	 */
-	TAILQ_ENTRY(rte_swx_table_entry) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_entry) node;
 
 	/** Key value for the current entry. Array of *key_size* bytes or NULL
 	 * if the *key_size* for the current table is 0.
@@ -111,7 +112,7 @@ struct rte_swx_table_entry {
 };
 
 /** List of table entries. */
-TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
+RTE_TAILQ_HEAD(rte_swx_table_entry_list, rte_swx_table_entry);
 
 /**
  * Table memory footprint get
diff --git a/lib/table/rte_swx_table_selector.h b/lib/table/rte_swx_table_selector.h
index 71b6a74810..62988d2856 100644
--- a/lib/table/rte_swx_table_selector.h
+++ b/lib/table/rte_swx_table_selector.h
@@ -16,7 +16,6 @@ extern "C" {
  */
 
 #include <stdint.h>
-#include <sys/queue.h>
 
 #include <rte_compat.h>
 
@@ -56,7 +55,7 @@ struct rte_swx_table_selector_params {
 /** Group member parameters. */
 struct rte_swx_table_selector_member {
 	/** Linked list connectivity. */
-	TAILQ_ENTRY(rte_swx_table_selector_member) node;
+	RTE_TAILQ_ENTRY(rte_swx_table_selector_member) node;
 
 	/** Member ID. */
 	uint32_t member_id;
@@ -66,7 +65,7 @@ struct rte_swx_table_selector_member {
 };
 
 /** List of group members. */
-TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
+RTE_TAILQ_HEAD(rte_swx_table_selector_member_list, rte_swx_table_selector_member);
 
 /** Group parameters. */
 struct rte_swx_table_selector_group {
diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index e0b67721b6..e4a445e709 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -32,7 +32,7 @@ vhost_user_iotlb_pending_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_pending_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -100,7 +100,8 @@ vhost_user_iotlb_pending_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_pending_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_pending_list, next,
+				temp_node) {
 		if (node->iova < iova)
 			continue;
 		if (node->iova >= iova + size)
@@ -121,7 +122,7 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		TAILQ_REMOVE(&vq->iotlb_list, node, next);
 		rte_mempool_put(vq->iotlb_pool, node);
 	}
@@ -141,7 +142,7 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
 
 	entry_idx = rte_rand() % vq->iotlb_cache_nr;
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		if (!entry_idx) {
 			TAILQ_REMOVE(&vq->iotlb_list, node, next);
 			rte_mempool_put(vq->iotlb_pool, node);
@@ -218,7 +219,7 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
 
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
-	TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		/* Sorted list */
 		if (unlikely(iova + size < node->iova))
 			break;
diff --git a/lib/vhost/rte_vdpa_dev.h b/lib/vhost/rte_vdpa_dev.h
index bfada387b0..b0f494815f 100644
--- a/lib/vhost/rte_vdpa_dev.h
+++ b/lib/vhost/rte_vdpa_dev.h
@@ -71,7 +71,7 @@ struct rte_vdpa_dev_ops {
  * vdpa device structure includes device address and device operations.
  */
 struct rte_vdpa_device {
-	TAILQ_ENTRY(rte_vdpa_device) next;
+	RTE_TAILQ_ENTRY(rte_vdpa_device) next;
 	/** Generic device information */
 	struct rte_device *device;
 	/** vdpa device operations */
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index 99a926a772..6dd91859ac 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -115,7 +115,7 @@ rte_vdpa_unregister_device(struct rte_vdpa_device *dev)
 	int ret = -1;
 
 	rte_spinlock_lock(&vdpa_device_list_lock);
-	TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
+	RTE_TAILQ_FOREACH_SAFE(cur_dev, &vdpa_device_list, next, tmp_dev) {
 		if (dev != cur_dev)
 			continue;
 
-- 
2.30.2
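
For reference, a minimal usage sketch of the renamed safe-iteration macro
(the element type and helper below are made up for illustration, they are
not part of the patch):

#include <stdlib.h>
#include <sys/queue.h>   /* TAILQ_REMOVE */
#include <rte_os.h>      /* RTE_TAILQ_ENTRY, RTE_TAILQ_HEAD */
#include <rte_tailq.h>   /* RTE_TAILQ_FOREACH_SAFE */

struct item {
	RTE_TAILQ_ENTRY(item) next; /* linkage inside the list */
	int value;
};

RTE_TAILQ_HEAD(item_list, item);

static void
purge_items(struct item_list *list)
{
	struct item *it, *tmp;

	/* The _SAFE variant keeps the next element in 'tmp', so 'it' can be
	 * removed and freed while iterating.
	 */
	RTE_TAILQ_FOREACH_SAFE(it, list, next, tmp) {
		TAILQ_REMOVE(list, it, next);
		free(it);
	}
}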


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH v6 1/2] ethdev: add an API to get device configuration info
  @ 2021-08-25 20:07  3%     ` Ferruh Yigit
  2021-08-26  6:00  0%       ` Ajit Khaparde
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-08-25 20:07 UTC (permalink / raw)
  To: Jie Wang, dev; +Cc: xiaoyun.li, andrew.rybchenko, thomas

On 8/24/2021 7:19 PM, Jie Wang wrote:
> This patch adds a new API "rte_eth_dev_conf_info_get()" to help testpmd get
> device configuration info.
> 
> Signed-off-by: Jie Wang <jie1x.wang@intel.com>
> ---
>  lib/ethdev/rte_ethdev.c | 27 +++++++++++++++++++++++++++
>  lib/ethdev/rte_ethdev.h | 26 ++++++++++++++++++++++++++
>  lib/ethdev/version.map  |  3 +++
>  3 files changed, 56 insertions(+)
> 
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 9d95cd11e1..74184099a1 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -3458,6 +3458,33 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
>  	return 0;
>  }
>  
> +int
> +rte_eth_dev_conf_info_get(uint16_t port_id,
> +				struct rte_eth_dev_conf_info *dev_conf_info)
> +{
> +	struct rte_eth_dev *dev;
> +
> +	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> +	dev = &rte_eth_devices[port_id];
> +
> +	if (dev_conf_info == NULL) {
> +		RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u config info to NULL\n",
> +			port_id);
> +		return -EINVAL;
> +	}
> +
> +	/*
> +	 * Init dev_conf_info before port_id check since caller does not have
> +	 * return status and does not know if get is successful or not.
> +	 */
> +	memset(dev_conf_info, 0, sizeof(struct rte_eth_dev_conf_info));
> +
> +	dev_conf_info->rx_offloads = dev->data->dev_conf.rxmode.offloads;
> +	dev_conf_info->tx_offloads = dev->data->dev_conf.txmode.offloads;
> +
> +	return 0;
> +}
> +
>  int
>  rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask,
>  				 uint32_t *ptypes, int num)
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index d2b27c351f..70a2db550f 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1587,6 +1587,15 @@ struct rte_eth_dev_info {
>  	void *reserved_ptrs[2];   /**< Reserved for future fields */
>  };
>  
> +/**
> + * Ethernet device configuration information structure.
> + * Used to retrieve information about configured device.
> + */
> +struct rte_eth_dev_conf_info {
> +	uint64_t rx_offloads; /**rxmode offloads */
> +	uint64_t tx_offloads; /**txmode offloads */
> +};

My concern is that if we need to extend this struct later, when an application
wants to get more of the current config from the DPDK layer, it will cause an
ABI break and will need to wait for the next LTS.

And as this struct grows, it will become a kind of duplicate of 'struct
rte_eth_conf'.

What do you think about reusing 'struct rte_eth_conf' in this API, to cover future needs?
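
Roughly something like below (just a sketch of the idea, the function name
is made up here, not a proposal of the final signature):

int
rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf)
{
	struct rte_eth_dev *dev;

	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
	dev = &rte_eth_devices[port_id];

	if (dev_conf == NULL) {
		RTE_ETHDEV_LOG(ERR,
			"Cannot get ethdev port %u config to NULL\n", port_id);
		return -EINVAL;
	}

	/* Return a copy of the whole configuration currently applied. */
	memcpy(dev_conf, &dev->data->dev_conf, sizeof(struct rte_eth_conf));

	return 0;
}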

> +
>  /**
>   * RX/TX queue states
>   */
> @@ -3058,6 +3067,23 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
>   */
>  int rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info);
>  
> +/**
> + * Retrieve the contextual information of an Ethernet device.
> + *
> + * @param port_id
> + *   The port identifier of the Ethernet device.
> + * @param dev_conf_info
> + *   A pointer to a structure of type *rte_eth_dev_conf_info* to be filled with
> + *   the contextual information of the Ethernet device.
> + * @return
> + *   - (0) if successful.
> + *   - (-ENOTSUP) if support for dev_infos_get() does not exist for the device.
> + *   - (-ENODEV) if *port_id* invalid.
> + *   - (-EINVAL) if bad parameter.
> + */
> +int rte_eth_dev_conf_info_get(uint16_t port_id,
> +				struct rte_eth_dev_conf_info *dev_conf_info);
> +
>  /**
>   * Retrieve the firmware version of a device.
>   *
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index 44d30b05ae..40539f99f9 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -249,6 +249,9 @@ EXPERIMENTAL {
>  	rte_mtr_meter_policy_delete;
>  	rte_mtr_meter_policy_update;
>  	rte_mtr_meter_policy_validate;
> +
> +	# added in 21.11
> +	rte_eth_dev_conf_info_get;
>  };
>  
>  INTERNAL {
> 


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v6 1/2] ethdev: add an API to get device configuration info
  2021-08-25 20:07  3%     ` Ferruh Yigit
@ 2021-08-26  6:00  0%       ` Ajit Khaparde
  0 siblings, 0 replies; 200+ results
From: Ajit Khaparde @ 2021-08-26  6:00 UTC (permalink / raw)
  To: Ferruh Yigit
  Cc: Jie Wang, dpdk-dev, Xiaoyun Li, Andrew Rybchenko, Thomas Monjalon

On Wed, Aug 25, 2021 at 1:08 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 8/24/2021 7:19 PM, Jie Wang wrote:
> > This patch adds a new API "rte_eth_dev_conf_info_get()" to help testpmd get
> > device configuration info.
> >
> > Signed-off-by: Jie Wang <jie1x.wang@intel.com>
> > ---
> >  lib/ethdev/rte_ethdev.c | 27 +++++++++++++++++++++++++++
> >  lib/ethdev/rte_ethdev.h | 26 ++++++++++++++++++++++++++
> >  lib/ethdev/version.map  |  3 +++
> >  3 files changed, 56 insertions(+)
> >
> > diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> > index 9d95cd11e1..74184099a1 100644
> > --- a/lib/ethdev/rte_ethdev.c
> > +++ b/lib/ethdev/rte_ethdev.c
> > @@ -3458,6 +3458,33 @@ rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info)
> >       return 0;
> >  }
> >
> > +int
> > +rte_eth_dev_conf_info_get(uint16_t port_id,
> > +                             struct rte_eth_dev_conf_info *dev_conf_info)
> > +{
> > +     struct rte_eth_dev *dev;
> > +
> > +     RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
> > +     dev = &rte_eth_devices[port_id];
> > +
> > +     if (dev_conf_info == NULL) {
> > +             RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u config info to NULL\n",
> > +                     port_id);
> > +             return -EINVAL;
> > +     }
> > +
> > +     /*
> > +      * Init dev_conf_info before port_id check since caller does not have
> > +      * return status and does not know if get is successful or not.
> > +      */
> > +     memset(dev_conf_info, 0, sizeof(struct rte_eth_dev_conf_info));
> > +
> > +     dev_conf_info->rx_offloads = dev->data->dev_conf.rxmode.offloads;
> > +     dev_conf_info->tx_offloads = dev->data->dev_conf.txmode.offloads;
> > +
> > +     return 0;
> > +}
> > +
> >  int
> >  rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask,
> >                                uint32_t *ptypes, int num)
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > index d2b27c351f..70a2db550f 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1587,6 +1587,15 @@ struct rte_eth_dev_info {
> >       void *reserved_ptrs[2];   /**< Reserved for future fields */
> >  };
> >
> > +/**
> > + * Ethernet device configuration information structure.
> > + * Used to retrieve information about configured device.
> > + */
> > +struct rte_eth_dev_conf_info {
> > +     uint64_t rx_offloads; /**rxmode offloads */
> > +     uint64_t tx_offloads; /**txmode offloads */
> > +};
>
> My concern is if we need to extend this struct later, when application wants to
> get more current config from the dpdk layer, it will cause ABI break and will
> need to wait next LTS.
>
> And as this struct grow, it will be kind of duplication of the 'struct
> rte_eth_conf'.
>
> What do you think to reuse 'struct rte_eth_conf' in this API, to cover future needs?
+1
rte_eth_conf gives all the information needed and more for future enhancements!

>
> > +
> >  /**
> >   * RX/TX queue states
> >   */
> > @@ -3058,6 +3067,23 @@ int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr);
> >   */
> >  int rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info);
> >
> > +/**
> > + * Retrieve the contextual information of an Ethernet device.
> > + *
> > + * @param port_id
> > + *   The port identifier of the Ethernet device.
> > + * @param dev_conf_info
> > + *   A pointer to a structure of type *rte_eth_dev_conf_info* to be filled with
> > + *   the contextual information of the Ethernet device.
> > + * @return
> > + *   - (0) if successful.
> > + *   - (-ENOTSUP) if support for dev_infos_get() does not exist for the device.
> > + *   - (-ENODEV) if *port_id* invalid.
> > + *   - (-EINVAL) if bad parameter.
> > + */
> > +int rte_eth_dev_conf_info_get(uint16_t port_id,
> > +                             struct rte_eth_dev_conf_info *dev_conf_info);
> > +
> >  /**
> >   * Retrieve the firmware version of a device.
> >   *
> > diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> > index 44d30b05ae..40539f99f9 100644
> > --- a/lib/ethdev/version.map
> > +++ b/lib/ethdev/version.map
> > @@ -249,6 +249,9 @@ EXPERIMENTAL {
> >       rte_mtr_meter_policy_delete;
> >       rte_mtr_meter_policy_update;
> >       rte_mtr_meter_policy_validate;
> > +
> > +     # added in 21.11
> > +     rte_eth_dev_conf_info_get;
> >  };
> >
> >  INTERNAL {
> >
>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce change in dma mapping/unmapping
  @ 2021-08-26  9:46  3%     ` Burakov, Anatoly
  2021-08-26 10:09  3%       ` Bruce Richardson
  0 siblings, 1 reply; 200+ results
From: Burakov, Anatoly @ 2021-08-26  9:46 UTC (permalink / raw)
  To: Ferruh Yigit, Xuan Ding, dev, maxime.coquelin, chenbo.xia
  Cc: jiayu.hu, bruce.richardson, techboard, David Marchand

On 26-Aug-21 10:29 AM, Ferruh Yigit wrote:
> On 8/25/2021 12:47 PM, Burakov, Anatoly wrote:
>> On 25-Aug-21 12:27 PM, Xuan Ding wrote:
>>> Currently, the VFIO subsystem will compact adjacent DMA regions for the
>>> purposes of saving space in the internal list of mappings. This has a
>>> side effect of compacting two separate mappings that just happen to be
>>> adjacent in memory. Since VFIO implementation on IA platforms also does
>>> not allow partial unmapping of memory mapped for DMA, the current DPDK
>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>> maps even though it could have been unmapped [1].
>>>
>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>> also include page size, and always map memory page-by-page.
>>>
>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>
>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>> ---
>>>    doc/guides/rel_notes/deprecation.rst | 3 +++
>>>    1 file changed, 3 insertions(+)
>>>
>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>> b/doc/guides/rel_notes/deprecation.rst
>>> index 76a4abfd6b..272ffa993e 100644
>>> --- a/doc/guides/rel_notes/deprecation.rst
>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>>      reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and other
>>>      information from the crypto/security operation. This field will be used to
>>>      communicate events such as soft expiry with IPsec in lookaside mode.
>>> +
>>> +  * vfio: the functions `rte_vfio_container_dma_map` and
>>> `rte_vfio_container_dma_unmap`
>>> +  will be amended to include page size. This change is targeted for DPDK 21.11.
>>>
>>
>> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
>>
> 
> Techboard decision was to add a new API, instead of updating existing ones, to
> not break the apps using this API.
> 
> @Xuan, @Anatoly, can you please confirm if this will solve your problem?
> 

I don't think adding a new API is a particularly good solution. The 
"new" API will be almost exactly the same as the old one, just with one 
extra parameter. I don't expect code duplication to be an issue, but 
having two APIs that do the same thing seems ripe for confusion.

If we add a new API, we can then either remove the old API entirely in 
22.11 (effectively renaming it), or we remove the new API in 22.11 and 
rename it back to the old function name. I don't think either of these 
is a good solution, as we risk introducing more users for the API that 
will later change.

I think the pain of updating current software for 21.11 (while keeping 
compatibility with 20.11 ABI!) is going to happen regardless of whether 
we decide to add a "temporary" new API or permanently rename the old 
one. It's (in my opinion) easier to just bite the bullet and update 
the function in 21.11.

However, if the tech board feels like adding a new API is a good 
solution, then okay, but we need to flesh out the roadmap a bit better. 
Do we rename the old API, or do we add a temporary new API?

-- 
Thanks,
Anatoly

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH] doc: announce change in dma mapping/unmapping
  2021-08-26  9:46  3%     ` Burakov, Anatoly
@ 2021-08-26 10:09  3%       ` Bruce Richardson
  2021-08-26 10:14  0%         ` Burakov, Anatoly
  0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2021-08-26 10:09 UTC (permalink / raw)
  To: Burakov, Anatoly
  Cc: Ferruh Yigit, Xuan Ding, dev, maxime.coquelin, chenbo.xia,
	jiayu.hu, techboard, David Marchand

On Thu, Aug 26, 2021 at 10:46:07AM +0100, Burakov, Anatoly wrote:
> On 26-Aug-21 10:29 AM, Ferruh Yigit wrote:
> > On 8/25/2021 12:47 PM, Burakov, Anatoly wrote:
> > > On 25-Aug-21 12:27 PM, Xuan Ding wrote:
> > > > Currently, the VFIO subsystem will compact adjacent DMA regions for the
> > > > purposes of saving space in the internal list of mappings. This has a
> > > > side effect of compacting two separate mappings that just happen to be
> > > > adjacent in memory. Since VFIO implementation on IA platforms also does
> > > > not allow partial unmapping of memory mapped for DMA, the current DPDK
> > > > VFIO implementation will prevent unmapping of accidentally adjacent
> > > > maps even though it could have been unmapped [1].
> > > > 
> > > > The proper fix for this issue is to change the VFIO DMA mapping API to
> > > > also include page size, and always map memory page-by-page.
> > > > 
> > > > [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
> > > > 
> > > > Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> > > > ---
> > > >    doc/guides/rel_notes/deprecation.rst | 3 +++
> > > >    1 file changed, 3 insertions(+)
> > > > 
> > > > diff --git a/doc/guides/rel_notes/deprecation.rst
> > > > b/doc/guides/rel_notes/deprecation.rst
> > > > index 76a4abfd6b..272ffa993e 100644
> > > > --- a/doc/guides/rel_notes/deprecation.rst
> > > > +++ b/doc/guides/rel_notes/deprecation.rst
> > > > @@ -287,3 +287,6 @@ Deprecation Notices
> > > >      reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and other
> > > >      information from the crypto/security operation. This field will be used to
> > > >      communicate events such as soft expiry with IPsec in lookaside mode.
> > > > +
> > > > +  * vfio: the functions `rte_vfio_container_dma_map` and
> > > > `rte_vfio_container_dma_unmap`
> > > > +  will be amended to include page size. This change is targeted for DPDK 21.11.
> > > > 
> > > 
> > > Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > 
> > 
> > Techboard decision was to add a new API, instead of updating existing ones, to
> > not break the apps using this API.
> > 
> > @Xuan, @Anatoly, can you please confirm if this will solve your problem?
> > 
> 
> I don't think adding a new API is a particularly good solution. The "new"
> API will be almost exactly as the old one, but adding one parameter. I don't
> expect code duplication to be an issue, but having two API's that do the
> same thing seems like it's rife for potential confusion.
> 
Well, if one API is marked as deprecated, then there will be no confusion
for users, since using the wrong one will give a warning pointing to the
right one.

> If we add a new API, we can then either remove the old API entirely in
> 22.11 (effectively renaming it), or we remove the new API in 22.11 and
> rename it back to the old function name. I don't think neither of these
> is a good solution, as we risk introducing more users for the API that
> will later change.
The new API will not be renamed to the old one, since that would break apps
using it without a proper deprecation process. Removing the old one alone
would be the approach to take, and it would correctly follow the
deprecation process, giving users at least 1 year, if not 2, of notice
about the change.

> 
> I think the pain of updating current software for 21.11 (while keeping
> compatibility with 20.11 ABI!) is going to happen regardless, and whether we
> decide to add a "temporary" new API or permanently rename the old one. It's
> (in my opinion) easier to just bite the bullet and update the function in
> 21.11.
I fail to see the issue with adding a new function. Whether we add a new
function or add a parameter to the existing one, code will have to change
either way. The advantage of the former scheme, adding the new function, is
that it shows that we are serious about our ABI/API compatibility process,
and are not lax about passing exceptions when other options are available.

> 
> However, if the tech board feels like adding a new API is a good solution,
> then okay, but we need to flesh out roadmap a bit better. Do we rename the
> old API, or do we add a temporary new API?

New API added, old API deprecated. In future old API goes away leaving new
API as the only option.
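
For illustration only (the new function name and the extra parameter are
assumptions on my side, not a settled API), the split could look like:

/* Existing prototype, kept for one more release but marked deprecated. */
__rte_deprecated
int rte_vfio_container_dma_map(int container_fd, uint64_t vaddr,
		uint64_t iova, uint64_t len);

/* New prototype carrying the page size, so memory can be mapped
 * page by page and later partially unmapped.
 */
int rte_vfio_container_dma_map_pages(int container_fd, uint64_t vaddr,
		uint64_t iova, uint64_t len, uint64_t pagesz);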

/Bruce

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [PATCH v8] doc: add release milestones definition
  @ 2021-08-26 10:11  5% ` Thomas Monjalon
  0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2021-08-26 10:11 UTC (permalink / raw)
  To: dev
  Cc: david.marchand, ferruh.yigit, Asaf Penso, John McNamara,
	Ajit Khaparde, Bruce Richardson

From: Asaf Penso <asafp@nvidia.com>

Adding more information about the release milestones.
This includes the scope of change, expectations, etc.

Signed-off-by: Asaf Penso <asafp@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: John McNamara <john.mcnamara@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
v2: fix styling format and add content in the commit message
v3: change punctuation and avoid plural form when unneeded
v4: avoid abbreviations, "Priority" in -rc, and reword as John suggests
v5: note that release candidates may vary
v6: merge RFC and proposal deadline, add roadmap link and reduce duplication
v7: make expectations clearer and stricter
v8: add tests, more fixes, maintainers approval and new API rules
---
 doc/guides/contributing/patches.rst | 85 +++++++++++++++++++++++++++--
 1 file changed, 80 insertions(+), 5 deletions(-)

diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index b9cc6e67ae..ef784d0f99 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -164,6 +164,10 @@ Make your planned changes in the cloned ``dpdk`` repo. Here are some guidelines
   the :doc:`ABI policy <abi_policy>` and :ref:`ABI versioning <abi_versioning>`
   guides. New external functions should also be added in alphabetical order.
 
+* Any new API function should be used in ``/app`` test directory.
+
+* When introducing a new device API, at least one driver should implement it.
+
 * Important changes will require an addition to the release notes in ``doc/guides/rel_notes/``.
   See the :ref:`Release Notes section of the Documentation Guidelines <doc_guidelines>` for details.
 
@@ -177,6 +181,8 @@ Make your planned changes in the cloned ``dpdk`` repo. Here are some guidelines
 * Add documentation, if relevant, in the form of Doxygen comments or a User Guide in RST format.
   See the :ref:`Documentation Guidelines <doc_guidelines>`.
 
+* Code and related documentation must be updated atomically in the same patch.
+
 Once the changes have been made you should commit them to your local repo.
 
 For small changes, that do not require specific explanations, it is better to keep things together in the
@@ -185,11 +191,6 @@ Larger changes that require different explanations should be separated into logi
 A good way of thinking about whether a patch should be split is to consider whether the change could be
 applied without dependencies as a backport.
 
-It is better to keep the related documentation changes in the same patch
-file as the code, rather than one big documentation patch at the end of a
-patchset. This makes it easier for future maintenance and development of the
-code.
-
 As a guide to how patches should be structured run ``git log`` on similar files.
 
 
@@ -663,3 +664,77 @@ patch accepted. The general cycle for patch review and acceptance is:
      than rework of the original.
    * Trivial patches may be merged sooner than described above at the tree committer's
      discretion.
+
+
+Milestones definition
+---------------------
+
+Each DPDK release has milestones that help everyone to converge to the release date.
+The following is a list of these milestones together with
+concrete definitions and expectations for a typical release cycle.
+An average cycle lasts 3 months and has 4 release candidates in the last month.
+Test reports are expected to be received after each release candidate.
+The number and expectations of release candidates might vary slightly.
+The schedule is updated in the `roadmap <https://core.dpdk.org/roadmap/#dates>`_.
+
+.. note::
+   Sooner is always better. Deadlines are not ideal dates.
+
+   Integration is never guaranteed but everyone can help.
+
+Roadmap
+~~~~~~~
+
+* Announce new features in libraries, drivers, applications, and examples.
+* To be published before the first day of the release cycle.
+
+Proposal Deadline
+~~~~~~~~~~~~~~~~~
+
+* Must send an RFC (Request For Comments) or a complete patch of new features.
+* Early RFC gives time for design review before complete implementation.
+* Should include at least the API changes in libraries and applications.
+* Library code should be quite complete at the deadline.
+* Nice to have: driver implementation, example code, and documentation.
+
+rc1
+~~~
+
+* Priority: libraries. No library feature should be accepted after -rc1.
+* API changes or additions must be implemented in libraries.
+* The API must include Doxygen documentation
+  and be part of the relevant .rst files (library-specific and release notes).
+* API should be used in a test application (``/app``).
+* At least one PMD should implement the API.
+  It may be a draft sent in a separate series.
+* The above should be sent to the mailing list at least 2 weeks before -rc1
+  to give time for review and maintainers approval.
+* If no review after 10 days, a reminder should be sent.
+* Nice to have: example code (``/examples``)
+
+rc2
+~~~
+
+* Priority: drivers. No driver feature should be accepted after -rc2.
+* A driver change must include documentation
+  in the relevant .rst files (driver-specific and release notes).
+* The above should be sent to the mailing list at least 2 weeks before -rc2.
+* Any issue found in -rc1 should be fixed.
+
+rc3
+~~~
+
+* Priority: applications. No application feature should be accepted after -rc3.
+* New functionality that does not depend on libraries update
+  can be integrated as part of -rc3.
+* The application change must include documentation in the relevant .rst files
+  (application-specific and release notes if significant).
+* Libraries and drivers cleanup are allowed.
+* Small driver reworks.
+* Critical and minor bug fixes.
+
+rc4
+~~~
+
+* Documentation updates.
+* Critical bug fixes.
-- 
2.31.1


^ permalink raw reply	[relevance 5%]

* Re: [dpdk-dev] [PATCH] doc: announce change in dma mapping/unmapping
  2021-08-26 10:09  3%       ` Bruce Richardson
@ 2021-08-26 10:14  0%         ` Burakov, Anatoly
  2021-08-31 13:42  0%           ` Ding, Xuan
  0 siblings, 1 reply; 200+ results
From: Burakov, Anatoly @ 2021-08-26 10:14 UTC (permalink / raw)
  To: Bruce Richardson
  Cc: Ferruh Yigit, Xuan Ding, dev, maxime.coquelin, chenbo.xia,
	jiayu.hu, techboard, David Marchand

On 26-Aug-21 11:09 AM, Bruce Richardson wrote:
> On Thu, Aug 26, 2021 at 10:46:07AM +0100, Burakov, Anatoly wrote:
>> On 26-Aug-21 10:29 AM, Ferruh Yigit wrote:
>>> On 8/25/2021 12:47 PM, Burakov, Anatoly wrote:
>>>> On 25-Aug-21 12:27 PM, Xuan Ding wrote:
>>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for the
>>>>> purposes of saving space in the internal list of mappings. This has a
>>>>> side effect of compacting two separate mappings that just happen to be
>>>>> adjacent in memory. Since VFIO implementation on IA platforms also does
>>>>> not allow partial unmapping of memory mapped for DMA, the current DPDK
>>>>> VFIO implementation will prevent unmapping of accidentally adjacent
>>>>> maps even though it could have been unmapped [1].
>>>>>
>>>>> The proper fix for this issue is to change the VFIO DMA mapping API to
>>>>> also include page size, and always map memory page-by-page.
>>>>>
>>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
>>>>>
>>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
>>>>> ---
>>>>>     doc/guides/rel_notes/deprecation.rst | 3 +++
>>>>>     1 file changed, 3 insertions(+)
>>>>>
>>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
>>>>> b/doc/guides/rel_notes/deprecation.rst
>>>>> index 76a4abfd6b..272ffa993e 100644
>>>>> --- a/doc/guides/rel_notes/deprecation.rst
>>>>> +++ b/doc/guides/rel_notes/deprecation.rst
>>>>> @@ -287,3 +287,6 @@ Deprecation Notices
>>>>>       reserved bytes to 2 (from 3), and use 1 byte to indicate warnings and other
>>>>>       information from the crypto/security operation. This field will be used to
>>>>>       communicate events such as soft expiry with IPsec in lookaside mode.
>>>>> +
>>>>> +  * vfio: the functions `rte_vfio_container_dma_map` and
>>>>> `rte_vfio_container_dma_unmap`
>>>>> +  will be amended to include page size. This change is targeted for DPDK 21.11.
>>>>>
>>>>
>>>> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
>>>>
>>>
>>> Techboard decision was to add a new API, instead of updating existing ones, to
>>> not break the apps using this API.
>>>
>>> @Xuan, @Anatoly, can you please confirm if this will solve your problem?
>>>
>>
>> I don't think adding a new API is a particularly good solution. The "new"
>> API will be almost exactly as the old one, but adding one parameter. I don't
>> expect code duplication to be an issue, but having two API's that do the
>> same thing seems like it's rife for potential confusion.
>>
> Well, if one API is marked as deprecated, then there will be no confusion
> for users, since using the wrong one will give a warning pointing to the
> right one.
> 
>> If we add a new API, we can then either remove the old API entirely in
>> 22.11 (effectively renaming it), or we remove the new API in 22.11 and
>> rename it back to the old function name. I don't think neither of these
>> is a good solution, as we risk introducing more users for the API that
>> will later change.
> The new API will not be renamed to the old one, since that would break apps
> using it without proper deprecation process. Removing the old one alone
> would be the approach to be used, but it would be correctly following the
> deprecation process and giving users at least 1 year, if no 2, of notice
> about the change.
> 
>>
>> I think the pain of updating current software for 21.11 (while keeping
>> compatibility with 20.11 ABI!) is going to happen regardless, and whether we
>> decide to add a "temporary" new API or permanently rename the old one. It's
>> (in my opinion) easier to just bite the bullet and update the function in
>> 21.11.
> I fail to see the issue with adding a new function. Whether we add a new
> function or add a parameter to the existing one, code will have to change
> either way. The advantage of the former scheme, adding the new function, is
> that it shows that we are serious about our ABI/API compatibility process,
> and are not lax about passing exceptions when other options are available.
> 
>>
>> However, if the tech board feels like adding a new API is a good solution,
>> then okay, but we need to flesh out roadmap a bit better. Do we rename the
>> old API, or do we add a temporary new API?
> 
> New API added, old API deprecated. In future old API goes away leaving new
> API as the only option.
> 
> /Bruce
> 

Okay, so it's settled then. I revoke my ack for this patch, and we need 
a new deprecation notice.

-- 
Thanks,
Anatoly

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH] doc: announce library refactor for ABI improvement
@ 2021-08-26 10:35 15% Ferruh Yigit
  2021-08-26 10:46  4% ` [dpdk-dev] [EXT] " Akhil Goyal
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-08-26 10:35 UTC (permalink / raw)
  To: Ray Kinsella
  Cc: Ferruh Yigit, dev, Konstantin Ananyev, Jerin Jacob Kollanukkaran,
	Akhil Goyal

Target is to reduce the public interface surface to improve the ABI
stability and this is preparation for the longer term stable ABI
support.

Mainly device abstraction layer libraries are impacted because they have
two interfaces: a public interface to the applications and an internal
interface to the drivers. Some driver/internal interface
structures/symbols are in the public interface by mistake; this work is
to clean them up.
Also, some libraries have 'static inline' functions for performance
reasons (like the ones in ethdev); this work plans to split the
structures and hide the part that is not used by the inline functions.

The need of the work for the stable ABI already discussed and planned by
the DPDK technical board:
https://mails.dpdk.org/archives/dev/2021-July/214662.html

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
Cc: Konstantin Ananyev <konstantin.ananyev@intel.com>
Cc: Jerin Jacob Kollanukkaran <jerinj@marvell.com>
Cc: Akhil Goyal <gakhil@marvell.com>
CC: Ray Kinsella <mdr@ashroe.eu>
---
 doc/guides/rel_notes/deprecation.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b0b..1f02d9e14501 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -63,6 +63,11 @@ Deprecation Notices
   us extending existing enum/define.
   One solution can be using a fixed size array instead of ``.*MAX.*`` value.
 
+* lib: Will hide internal data structures and symbols from the public interfaces
+  as much as possible in v21.11.
+  This ABI break is done to improve the ABI stability in the long term and will
+  be done mainly, but not only, in device abstraction layer libraries.
+
 * ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
   Macros will be added for backward compatibility.
   Backward compatibility macros will be removed on v22.11.
-- 
2.31.1


^ permalink raw reply	[relevance 15%]

* Re: [dpdk-dev] [EXT] [PATCH] doc: announce library refactor for ABI improvement
  2021-08-26 10:35 15% [dpdk-dev] [PATCH] doc: announce library refactor for ABI improvement Ferruh Yigit
@ 2021-08-26 10:46  4% ` Akhil Goyal
  2021-08-26 10:47  4%   ` Jerin Jacob
  2021-08-26 11:04  4%   ` Bruce Richardson
  0 siblings, 2 replies; 200+ results
From: Akhil Goyal @ 2021-08-26 10:46 UTC (permalink / raw)
  To: Ferruh Yigit, Ray Kinsella
  Cc: dev, Konstantin Ananyev, Jerin Jacob Kollanukkaran

> Target is to reduce the public interface surface to improve the ABI
> stability and this is preparation for the longer term stable ABI
> support.
> 
> Mainly device abstraction layer libraries are impacted because they have
> two interfaces, one is public interface to the applications and other is
> internal interface to the drivers. Some driver/internal interface
> structures/symbols are in the public interface by mistake, this work is
> to clean them.
> Also some libraries has 'static inline' functions for performance
> reasons (like ones in the ethdev), this work plans to split the
> structures and hide the part that is not used by inline functions.
> 
> The need of the work for the stable ABI already discussed and planned by
> the DPDK technical board:
> https://mails.dpdk.org/archives/dev/2021-July/214662.html
> 
> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> ---
Acked-by: Akhil Goyal <gakhil@marvell.com>

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [EXT] [PATCH] doc: announce library refactor for ABI improvement
  2021-08-26 10:46  4% ` [dpdk-dev] [EXT] " Akhil Goyal
@ 2021-08-26 10:47  4%   ` Jerin Jacob
  2021-08-26 11:04  4%   ` Bruce Richardson
  1 sibling, 0 replies; 200+ results
From: Jerin Jacob @ 2021-08-26 10:47 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: Ferruh Yigit, Ray Kinsella, dev, Konstantin Ananyev,
	Jerin Jacob Kollanukkaran

On Thu, Aug 26, 2021 at 4:16 PM Akhil Goyal <gakhil@marvell.com> wrote:
>
> > Target is to reduce the public interface surface to improve the ABI
> > stability and this is preparation for the longer term stable ABI
> > support.
> >
> > Mainly device abstraction layer libraries are impacted because they have
> > two interfaces, one is public interface to the applications and other is
> > internal interface to the drivers. Some driver/internal interface
> > structures/symbols are in the public interface by mistake, this work is
> > to clean them.
> > Also some libraries has 'static inline' functions for performance
> > reasons (like ones in the ethdev), this work plans to split the
> > structures and hide the part that is not used by inline functions.
> >
> > The need of the work for the stable ABI already discussed and planned by
> > the DPDK technical board:
> > https://mails.dpdk.org/archives/dev/2021-July/214662.html
> >
> > Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > ---
> Acked-by: Akhil Goyal <gakhil@marvell.com>

Acked-by: Jerin Jacob <jerinj@marvell.com>

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [EXT] [PATCH] doc: announce library refactor for ABI improvement
  2021-08-26 10:46  4% ` [dpdk-dev] [EXT] " Akhil Goyal
  2021-08-26 10:47  4%   ` Jerin Jacob
@ 2021-08-26 11:04  4%   ` Bruce Richardson
  2021-08-26 15:44  4%     ` Andrew Rybchenko
  1 sibling, 1 reply; 200+ results
From: Bruce Richardson @ 2021-08-26 11:04 UTC (permalink / raw)
  To: Akhil Goyal
  Cc: Ferruh Yigit, Ray Kinsella, dev, Konstantin Ananyev,
	Jerin Jacob Kollanukkaran

On Thu, Aug 26, 2021 at 10:46:35AM +0000, Akhil Goyal wrote:
> > Target is to reduce the public interface surface to improve the ABI
> > stability and this is preparation for the longer term stable ABI
> > support.
> > 
> > Mainly device abstraction layer libraries are impacted because they have
> > two interfaces, one is public interface to the applications and other is
> > internal interface to the drivers. Some driver/internal interface
> > structures/symbols are in the public interface by mistake, this work is
> > to clean them.
> > Also some libraries has 'static inline' functions for performance
> > reasons (like ones in the ethdev), this work plans to split the
> > structures and hide the part that is not used by inline functions.
> > 
> > The need of the work for the stable ABI already discussed and planned by
> > the DPDK technical board:
> > https://mails.dpdk.org/archives/dev/2021-July/214662.html
> > 
> > Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > ---
> Acked-by: Akhil Goyal <gakhil@marvell.com>

Acked-by: Bruce Richardson <bruce.richardson@intel.com>

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [PATCH v2 01/15] ethdev: introduce shared Rx queue
  @ 2021-08-26 11:58  4%               ` Jerin Jacob
  2021-08-28 14:16  0%                 ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-08-26 11:58 UTC (permalink / raw)
  To: Xueming(Steven) Li
  Cc: dpdk-dev, Ferruh Yigit, NBU-Contact-Thomas Monjalon, Andrew Rybchenko

On Thu, Aug 19, 2021 at 5:39 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Thursday, August 19, 2021 1:27 PM
> > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> > Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> >
> > On Wed, Aug 18, 2021 at 4:44 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Sent: Tuesday, August 17, 2021 11:12 PM
> > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Andrew Rybchenko
> > > > <andrew.rybchenko@oktetlabs.ru>
> > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > >
> > > > On Tue, Aug 17, 2021 at 5:01 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > Sent: Tuesday, August 17, 2021 5:33 PM
> > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > > > >
> > > > > > On Wed, Aug 11, 2021 at 7:34 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > > > > > >
> > > > > > > In current DPDK framework, each RX queue is pre-loaded with
> > > > > > > mbufs for incoming packets. When number of representors scale
> > > > > > > out in a switch domain, the memory consumption became
> > > > > > > significant. Most important, polling all ports leads to high
> > > > > > > cache miss, high latency and low throughput.
> > > > > > >
> > > > > > > This patch introduces shared RX queue. Ports with same
> > > > > > > configuration in a switch domain could share RX queue set by specifying sharing group.
> > > > > > > Polling any queue using same shared RX queue receives packets
> > > > > > > from all member ports. Source port is identified by mbuf->port.
> > > > > > >
> > > > > > > Port queue number in a shared group should be identical. Queue
> > > > > > > index is
> > > > > > > 1:1 mapped in shared group.
> > > > > > >
> > > > > > > Share RX queue must be polled on single thread or core.
> > > > > > >
> > > > > > > Multiple groups is supported by group ID.
> > > > > > >
> > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > Cc: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > ---
> > > > > > > Rx queue object could be used as shared Rx queue object, it's
> > > > > > > important to clear all queue control callback api that using queue object:
> > > > > > >   https://mails.dpdk.org/archives/dev/2021-July/215574.html
> > > > > >
> > > > > > >  #undef RTE_RX_OFFLOAD_BIT2STR diff --git
> > > > > > > a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > > > > > > d2b27c351f..a578c9db9d 100644
> > > > > > > --- a/lib/ethdev/rte_ethdev.h
> > > > > > > +++ b/lib/ethdev/rte_ethdev.h
> > > > > > > @@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
> > > > > > >         uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > > > > > >         uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > > > > > >         uint16_t rx_nseg; /**< Number of descriptions in rx_seg array.
> > > > > > > */
> > > > > > > +       uint32_t shared_group; /**< Shared port group index in
> > > > > > > + switch domain. */
> > > > > >
> > > > > > Not to able to see anyone setting/creating this group ID test application.
> > > > > > How this group is created?
> > > > >
> > > > > Nice catch, the initial testpmd version only support one default group(0).
> > > > > All ports that supports shared-rxq assigned in same group.
> > > > >
> > > > > We should be able to change "--rxq-shared" to "--rxq-shared-group"
> > > > > to support group other than default.
> > > > >
> > > > > To support more groups simultaneously, need to consider testpmd
> > > > > forwarding stream core assignment, all streams in same group need to stay on same core.
> > > > > It's possible to specify how many ports to increase group number,
> > > > > but user must schedule stream affinity carefully - error prone.
> > > > >
> > > > > On the other hand, one group should be sufficient for most
> > > > > customer, the doubt is whether it valuable to support multiple groups test.
> > > >
> > > > Ack. One group is enough in testpmd.
> > > >
> > > > My question was more about who creates this group and how. Shouldn't
> > > > we need an API to create the shared_group? If we do the following, at least I can see how it can be implemented in SW or other HW.
> > > >
> > > > - Create an aggregation queue group
> > > > - Attach multiple Rx queues to the aggregation queue group
> > > > - Pull the packets from the queue group (which internally fetches from
> > > > the Rx queues _attached_)
> > > >
> > > > Does the above kind of sequence break your representor use case?
> > >
> > > Seems more like a set of EAL wrappers. The current API tries to minimize the application effort to adopt shared-rxq.
> > > - step 1, not sure how important it is to create the group with an API; in rte_flow, a group is created on demand.
> >
> > Which rte_flow pattern/action is used for this?
>
> No rte_flow for this; I just recalled that the group in rte_flow is created on demand rather than via an API.
> I don't see anything else to create along with the group, so I doubt whether it is valuable to introduce a new API set to manage groups.

See below.

>
> >
> > > - step 2, currently the attaching is done in rte_eth_rx_queue_setup by specifying the offload and group in the rx_conf struct.
> > > - step 3, define a dedicated API to receive packets from the shared rxq? It looks clearer to receive packets from the shared rxq that way.
> > >   Currently, the rxq objects in a share group are the same - the shared rxq - so the eth callback eth_rx_burst_t(rxq_obj, mbufs, n) could
> > >   be used to receive packets from any port in the group, normally the first port (PF) in the group.
> > >   An alternative way is defining a vdev with the same queue number and copying the rxq objects, which will make the vdev a proxy of
> > >   the shared rxq group - this could be a helper API.
> > >
> > > Anyway the wrapper doesn't break the use case; the step 3 API is clearer, but we need to understand how to implement it efficiently.
> >
> > Are you doing this feature based on any HW support or is it a pure SW thing? If it is SW, it is better to have just a new vdev, like
> > drivers/net/bonding/. With this we can help aggregate multiple Rxqs across the multiple ports of the same driver.
>
> Based on HW support.

In Marvell HW, we have some support; I will outline it here along with some queries on this.

# We need to create some new HW structure for aggregation
# Connect each Rxq to the new HW structure for aggregation
# Use rx_burst from the new HW structure.

Could you outline your HW support?

Also, I am not able to understand how this will reduce the memory;
at least in our HW it needs creating more memory now to deal with this,
as we need to deal with the new HW structure.

How does it reduce the memory in your HW? Also, if memory is the
constraint, why NOT reduce the number of queues?

# Also, I was thinking, one way to avoid the fast path or ABI change would be as follows.

# Driver initializes one more eth_dev_ops in the driver as an aggregator ethdev
# devargs of the new ethdev, or a specific API like
drivers/net/bonding/rte_eth_bond.h, can take as argument the (port, queue)
tuples which need to be aggregated by the new ethdev port
# No change in fast path or ABI is required in this model (a rough sketch follows below).
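
Roughly, the control path could look like the sketch below; every "rte_eth_aggr_*"
name is hypothetical and only illustrates the shape of such an aggregator ethdev,
it is not an existing API:

/* Hypothetical sketch - the aggregator API below does not exist today. */
uint16_t agg_port, nb;
struct rte_mbuf *pkts[32];

/* Create the aggregator ethdev (similar in spirit to rte_eth_bond_create()). */
agg_port = rte_eth_aggr_create("net_rx_aggr0", rte_socket_id()); /* hypothetical */

/* Attach the (port, queue) tuples that should be aggregated. */
rte_eth_aggr_queue_add(agg_port, member_port0, 0); /* hypothetical */
rte_eth_aggr_queue_add(agg_port, member_port1, 0); /* hypothetical */

/* Fast path stays the normal rx_burst on the aggregator port - no ABI change. */
nb = rte_eth_rx_burst(agg_port, 0, pkts, 32);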



> Most users might use the PF in the group as the anchor port for rx burst; the current definition should be easy for them to migrate to.
> But some users might prefer grouping some hot plugged/unplugged representors; EAL could provide wrappers, or users could do
> that themselves since the strategy is not that complex. Anyway, any suggestion is welcome.
>
> >
> >
> > >
> > > >
> > > >
> > > > >
> > > > > >
> > > > > >
> > > > > > >         /**
> > > > > > >          * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > > > > > >          * Only offloads set on rx_queue_offload_capa or
> > > > > > > rx_offload_capa @@ -1373,6 +1374,12 @@ struct rte_eth_conf {
> > > > > > > #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
> > > > > > >  #define DEV_RX_OFFLOAD_RSS_HASH                0x00080000
> > > > > > >  #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> > > > > > > +/**
> > > > > > > + * Rx queue is shared among ports in same switch domain to
> > > > > > > +save memory,
> > > > > > > + * avoid polling each port. Any port in group can be used to receive packets.
> > > > > > > + * Real source port number saved in mbuf->port field.
> > > > > > > + */
> > > > > > > +#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ   0x00200000
> > > > > > >
> > > > > > >  #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > > > > > >                                  DEV_RX_OFFLOAD_UDP_CKSUM | \
> > > > > > > --
> > > > > > > 2.25.1
> > > > > > >

^ permalink raw reply	[relevance 4%]

* Re: [dpdk-dev] [RFC 0/7] hide eth dev related structures
  2021-08-20 16:28  3% [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
  2021-08-20 16:28  2% ` [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure Konstantin Ananyev
@ 2021-08-26 12:37  3% ` Jerin Jacob
  1 sibling, 0 replies; 200+ results
From: Jerin Jacob @ 2021-08-26 12:37 UTC (permalink / raw)
  To: Konstantin Ananyev
  Cc: dpdk-dev, Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko,
	Qiming Yang, Qi Zhang, Beilei Xing, techboard

On Fri, Aug 20, 2021 at 9:59 PM Konstantin Ananyev
<konstantin.ananyev@intel.com> wrote:
>
> NOTE: This is just an RFC to start further discussion and collect the feedback.
> Due to the significant amount of work, the changes required are applied only to two
> PMDs so far: net/i40e and net/ice.
> So to build it you'll need to add:
> -Denable_drivers='common/*,mempool/*,net/ice,net/i40e'
> to your config options.

>
> That approach was selected to avoid(/minimize) possible performance losses.
>
> So far I have done only a limited amount of functional and performance testing.
> I didn't spot any functional problems, and the performance numbers
> remain the same before and after the patch on my box (testpmd, macswap fwd).


Based on testing on octeontx2, we see some regression in testpmd and
a bit on l3fwd too.

Without the patch: 73.5 mpps/core in testpmd iofwd
With the patch: 72.5 mpps/core in testpmd iofwd

Based on my understanding it is due to additional indirection.

My suggestion to fix the problem:
Remove the additional `data` redirection and pull the callback function
pointers back,
and keep the rest as opaque as done in the existing patch, like [1].

I don't believe this has any real implication on future ABI stability,
as we will not be adding
any new item to rte_eth_fp in any way; new features can be added in the slowpath
rte_eth_dev as mentioned in the patch.

[2] is a patch doing the same; I don't see any performance
regression with [2].


[1]
- struct rte_eth_burst_api {
- struct rte_eth_fp {
+ void *data;
  rte_eth_rx_burst_t rx_pkt_burst;
  /**< PMD receive function. */
  rte_eth_tx_burst_t tx_pkt_burst;
@@ -85,8 +100,19 @@ struct rte_eth_burst_api {
  /**< Check the status of a Rx descriptor. */
  rte_eth_tx_descriptor_status_t tx_descriptor_status;
  /**< Check the status of a Tx descriptor. */
+ /**
+ * User-supplied functions called from rx_burst to post-process
+ * received packets before passing them to the user
+ */
+ struct rte_eth_rxtx_callback
+ *post_rx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
+ /**
+ * User-supplied functions called from tx_burst to pre-process
+ * received packets before passing them to the driver for transmission.
+ */
+ struct rte_eth_rxtx_callback *pre_tx_burst_cbs[RTE_MAX_QUEUES_PER_PORT];
  uintptr_t reserved[2];
-} __rte_cache_min_aligned;
+} __rte_cache_aligned;

[2]
https://pastebin.com/CuqkrCW4

^ permalink raw reply	[relevance 3%]

* [dpdk-dev] [RFC 0/7] make rte_intr_handle internal
@ 2021-08-26 14:57  4% Harman Kalra
  2021-08-26 14:57  1% ` [dpdk-dev] [RFC 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
  0 siblings, 1 reply; 200+ results
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
  To: dev; +Cc: Harman Kalra

Moving struct rte_intr_handle to be an internal structure to
avoid any ABI breakages in the future, since this structure defines
some static arrays and changing the respective macros breaks the ABI.
Eg:
Currently RTE_MAX_RXTX_INTR_VEC_ID imposes a limit of at most 512
MSI-X interrupts that can be defined for a PCI device, while the PCI
specification allows a maximum of 2048 MSI-X interrupts to be used.
If some PCI device requires more than 512 vectors, we must either change the
RTE_MAX_RXTX_INTR_VEC_ID limit or dynamically allocate based on the
PCI device MSI-X size at probe time. Either way it's an ABI breakage.
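
For context, here is a simplified view of why the macro is baked into the ABI
(fields trimmed to those referenced in this series; the real definition in
rte_eal_interrupts.h has more members and a union around the fds):

struct rte_intr_handle {          /* simplified sketch, not the full definition */
        int fd;                   /* interrupt event fd */
        int uio_cfg_fd;           /* UIO config fd / VFIO device fd */
        enum rte_intr_handle_type type;
        uint32_t max_intr;
        uint32_t nb_efd;
        uint8_t efd_counter_size;
        int efds[RTE_MAX_RXTX_INTR_VEC_ID];                     /* static array */
        struct rte_epoll_event elist[RTE_MAX_RXTX_INTR_VEC_ID]; /* static array */
};
/* Changing RTE_MAX_RXTX_INTR_VEC_ID changes sizeof(struct rte_intr_handle),
 * which breaks every application or driver that embeds or allocates it directly.
 */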

The change is already included in the 21.11 ABI improvement spreadsheet (item 42):
https://docs.google.com/spreadsheets/d/1betlC000ua5SsSiJIcC54mCCCJnW6voH5Dqv9UxeyfE/edit#gid=0


This series makes struct rte_intr_handle totally opaque to the outside
world by wrapping it inside a .c file and providing get/set wrapper APIs
to read or manipulate its fields. Any changes to be made to any of the
fields should be done via these get/set APIs.
A new eal_common_interrupts.c is introduced where all these APIs are defined;
it also hides the struct rte_intr_handle definition.
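
In other words, the public header ends up with only an opaque declaration plus
accessors, roughly along the following lines (a simplified sketch; the exact
prototypes and parameter names are the ones added in patch 1 and may differ):

/* rte_interrupts.h (sketch): the definition now lives in eal_common_interrupts.c */
struct rte_intr_handle;

struct rte_intr_handle *rte_intr_handle_instance_alloc(int size, bool from_hugepage);
void rte_intr_handle_instance_free(struct rte_intr_handle *intr_handle);
int rte_intr_handle_fd_get(const struct rte_intr_handle *intr_handle);
enum rte_intr_handle_type rte_intr_handle_type_get(const struct rte_intr_handle *intr_handle);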

Details on each patch of the series:
Patch 1: eal: interrupt handle API prototypes
This patch provides prototypes of all the new get/set APIs, and
also rearranges the headers related to the interrupt framework. Epoll
related definitions and prototypes are moved into a new header, i.e.
rte_epoll.h, and the APIs defined in rte_eal_interrupts.h which were
driver specific are moved to rte_interrupts.h (as it was anyway
accessible and used outside the DPDK library). Later in the series
rte_eal_interrupts.h is removed.

Patch 2: eal/interrupts: implement get set APIs
Implementing all the get, set and alloc APIs. The alloc APIs are implemented
to allocate memory for an interrupt handle instance. Currently most of
the drivers define the interrupt handle instance as static, but now it can't
be static as the size of rte_intr_handle is unknown to the drivers.
Drivers are expected to allocate interrupt instances during initialization
and free these instances during the cleanup phase.
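
As a rough illustration of the expected driver-side pattern (the alloc/free helpers
and RTE_INTR_HANDLE_DEFAULT_SIZE appear later in this series; struct foo_dev and the
fd/type setter names used here are assumed from the same naming scheme):

/* probe/init: the handle is now allocated, no longer a static struct member */
static int
foo_dev_init(struct foo_dev *dev, int uio_fd)
{
        dev->intr_handle = rte_intr_handle_instance_alloc(
                        RTE_INTR_HANDLE_DEFAULT_SIZE, false);
        if (dev->intr_handle == NULL)
                return -ENOMEM;
        rte_intr_handle_fd_set(dev->intr_handle, uio_fd);                /* assumed name */
        rte_intr_handle_type_set(dev->intr_handle, RTE_INTR_HANDLE_UIO); /* assumed name */
        /* rte_intr_enable()/rte_intr_disable() then operate on the opaque handle */
        return 0;
}

/* remove/cleanup: release the instance allocated at probe time */
static void
foo_dev_uninit(struct foo_dev *dev)
{
        rte_intr_handle_instance_free(dev->intr_handle);
        dev->intr_handle = NULL;
}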

Patch 3: eal/interrupts: avoid direct access to interrupt handle
Modifying the interrupt framework for linux and freebsd to use these
get/set/alloc APIs as required and avoid accessing the fields
directly.

Patch 4: test/interrupt: apply get set interrupt handle APIs
Updating interrupt test suite to use interrupt handle APIs.

Patch 5: drivers: remove direct access to interrupt handle fields
Modifying all the drivers and libraries which are currently directly
accessing the interrupt handle fields. Drivers are expected to
allocate the interrupt instance, use the get/set APIs with the allocated
interrupt handle and free it on cleanup.

Patch 6: eal/interrupts: make interrupt handle structure opaque
In this patch rte_eal_interrupts.h is removed and the struct rte_intr_handle
definition is moved to a .c file to make it completely opaque. As part of
interrupt handle allocation, arrays like efds and elist (which are currently
static) are dynamically allocated with a default size
(RTE_MAX_RXTX_INTR_VEC_ID). Later these arrays can be reallocated as per the
device requirement using the new API rte_intr_handle_event_list_update().
E.g., on PCI device probing the MSI-X size can be queried and these arrays can
be reallocated accordingly.

Patch 7: eal/alarm: introduce alarm fini routine
Introducing an alarm fini routine, so that the memory allocated for the alarm
interrupt instance can be freed in alarm fini.

Testing performed:
1. Validated the series by running the interrupts and alarm test suites.
2. Validated l3fwd-power functionality with octeontx2 and i40e Intel cards,
   where interrupts are expected on packet arrival.

Harman Kalra (7):
  eal: interrupt handle API prototypes
  eal/interrupts: implement get set APIs
  eal/interrupts: avoid direct access to interrupt handle
  test/interrupt: apply get set interrupt handle APIs
  drivers: remove direct access to interrupt handle fields
  eal/interrupts: make interrupt handle structure opaque
  eal/alarm: introduce alarm fini routine

 MAINTAINERS                                   |   1 +
 app/test/test_interrupts.c                    | 237 +++---
 drivers/baseband/acc100/rte_acc100_pmd.c      |  18 +-
 .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c         |  13 +-
 drivers/baseband/fpga_lte_fec/fpga_lte_fec.c  |  14 +-
 drivers/bus/auxiliary/auxiliary_common.c      |   2 +
 drivers/bus/auxiliary/linux/auxiliary.c       |  11 +
 drivers/bus/auxiliary/rte_bus_auxiliary.h     |   2 +-
 drivers/bus/dpaa/dpaa_bus.c                   |  28 +-
 drivers/bus/dpaa/rte_dpaa_bus.h               |   2 +-
 drivers/bus/fslmc/fslmc_bus.c                 |  17 +-
 drivers/bus/fslmc/fslmc_vfio.c                |  32 +-
 drivers/bus/fslmc/portal/dpaa2_hw_dpio.c      |  21 +-
 drivers/bus/fslmc/portal/dpaa2_hw_pvt.h       |   2 +-
 drivers/bus/fslmc/rte_fslmc.h                 |   2 +-
 drivers/bus/ifpga/ifpga_bus.c                 |  16 +-
 drivers/bus/ifpga/rte_bus_ifpga.h             |   2 +-
 drivers/bus/pci/linux/pci.c                   |   4 +-
 drivers/bus/pci/linux/pci_uio.c               |  73 +-
 drivers/bus/pci/linux/pci_vfio.c              | 115 ++-
 drivers/bus/pci/pci_common.c                  |  29 +-
 drivers/bus/pci/pci_common_uio.c              |  21 +-
 drivers/bus/pci/rte_bus_pci.h                 |   4 +-
 drivers/bus/vmbus/linux/vmbus_bus.c           |   7 +
 drivers/bus/vmbus/linux/vmbus_uio.c           |  37 +-
 drivers/bus/vmbus/rte_bus_vmbus.h             |   2 +-
 drivers/bus/vmbus/vmbus_common_uio.c          |  24 +-
 drivers/common/cnxk/roc_cpt.c                 |   8 +-
 drivers/common/cnxk/roc_dev.c                 |  14 +-
 drivers/common/cnxk/roc_irq.c                 | 106 +--
 drivers/common/cnxk/roc_nix_irq.c             |  37 +-
 drivers/common/cnxk/roc_npa.c                 |   2 +-
 drivers/common/cnxk/roc_platform.h            |  34 +
 drivers/common/cnxk/roc_sso.c                 |   4 +-
 drivers/common/cnxk/roc_tim.c                 |   4 +-
 drivers/common/octeontx2/otx2_dev.c           |  14 +-
 drivers/common/octeontx2/otx2_irq.c           | 117 +--
 .../octeontx2/otx2_cryptodev_hw_access.c      |   4 +-
 drivers/event/octeontx2/otx2_evdev_irq.c      |  12 +-
 drivers/mempool/octeontx2/otx2_mempool.c      |   2 +-
 drivers/net/atlantic/atl_ethdev.c             |  22 +-
 drivers/net/avp/avp_ethdev.c                  |   8 +-
 drivers/net/axgbe/axgbe_ethdev.c              |  12 +-
 drivers/net/axgbe/axgbe_mdio.c                |   6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c                |  32 +-
 drivers/net/bnxt/bnxt_irq.c                   |   4 +-
 drivers/net/dpaa/dpaa_ethdev.c                |  47 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              |  10 +-
 drivers/net/e1000/em_ethdev.c                 |  24 +-
 drivers/net/e1000/igb_ethdev.c                |  84 ++-
 drivers/net/ena/ena_ethdev.c                  |  36 +-
 drivers/net/enic/enic_main.c                  |  27 +-
 drivers/net/failsafe/failsafe.c               |  24 +-
 drivers/net/failsafe/failsafe_intr.c          |  45 +-
 drivers/net/failsafe/failsafe_ops.c           |  23 +-
 drivers/net/failsafe/failsafe_private.h       |   2 +-
 drivers/net/fm10k/fm10k_ethdev.c              |  32 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          |  10 +-
 drivers/net/hns3/hns3_ethdev.c                |  50 +-
 drivers/net/hns3/hns3_ethdev_vf.c             |  57 +-
 drivers/net/hns3/hns3_rxtx.c                  |   2 +-
 drivers/net/i40e/i40e_ethdev.c                |  55 +-
 drivers/net/i40e/i40e_ethdev_vf.c             |  43 +-
 drivers/net/iavf/iavf_ethdev.c                |  41 +-
 drivers/net/iavf/iavf_vchnl.c                 |   4 +-
 drivers/net/ice/ice_dcf.c                     |  10 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  23 +-
 drivers/net/ice/ice_ethdev.c                  |  51 +-
 drivers/net/igc/igc_ethdev.c                  |  47 +-
 drivers/net/ionic/ionic_ethdev.c              |  12 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              |  70 +-
 drivers/net/memif/memif_socket.c              |  99 ++-
 drivers/net/memif/memif_socket.h              |   4 +-
 drivers/net/memif/rte_eth_memif.c             |  63 +-
 drivers/net/memif/rte_eth_memif.h             |   2 +-
 drivers/net/mlx4/mlx4.c                       |  20 +-
 drivers/net/mlx4/mlx4.h                       |   2 +-
 drivers/net/mlx4/mlx4_intr.c                  |  48 +-
 drivers/net/mlx5/linux/mlx5_os.c              |  56 +-
 drivers/net/mlx5/linux/mlx5_socket.c          |  26 +-
 drivers/net/mlx5/mlx5.h                       |   6 +-
 drivers/net/mlx5/mlx5_rxq.c                   |  43 +-
 drivers/net/mlx5/mlx5_trigger.c               |   4 +-
 drivers/net/mlx5/mlx5_txpp.c                  |  27 +-
 drivers/net/netvsc/hn_ethdev.c                |   4 +-
 drivers/net/nfp/nfp_net.c                     |  42 +-
 drivers/net/ngbe/ngbe_ethdev.c                |  31 +-
 drivers/net/octeontx2/otx2_ethdev_irq.c       |  35 +-
 drivers/net/qede/qede_ethdev.c                |  16 +-
 drivers/net/sfc/sfc_intr.c                    |  29 +-
 drivers/net/tap/rte_eth_tap.c                 |  37 +-
 drivers/net/tap/rte_eth_tap.h                 |   2 +-
 drivers/net/tap/tap_intr.c                    |  33 +-
 drivers/net/thunderx/nicvf_ethdev.c           |  13 +
 drivers/net/thunderx/nicvf_struct.h           |   2 +-
 drivers/net/txgbe/txgbe_ethdev.c              |  36 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |  35 +-
 drivers/net/vhost/rte_eth_vhost.c             |  78 +-
 drivers/net/virtio/virtio_ethdev.c            |  17 +-
 .../net/virtio/virtio_user/virtio_user_dev.c  |  53 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |  45 +-
 drivers/raw/ifpga/ifpga_rawdev.c              |  42 +-
 drivers/raw/ntb/ntb.c                         |  10 +-
 .../regex/octeontx2/otx2_regexdev_hw_access.c |   4 +-
 drivers/vdpa/ifc/ifcvf_vdpa.c                 |   5 +-
 drivers/vdpa/mlx5/mlx5_vdpa.c                 |  11 +
 drivers/vdpa/mlx5/mlx5_vdpa.h                 |   4 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c           |  22 +-
 drivers/vdpa/mlx5/mlx5_vdpa_virtq.c           |  46 +-
 lib/bbdev/rte_bbdev.c                         |   4 +-
 lib/eal/common/eal_common_interrupts.c        | 668 +++++++++++++++++
 lib/eal/common/eal_private.h                  |  11 +
 lib/eal/common/meson.build                    |   2 +
 lib/eal/freebsd/eal.c                         |   1 +
 lib/eal/freebsd/eal_alarm.c                   |  57 +-
 lib/eal/freebsd/eal_interrupts.c              |  93 ++-
 lib/eal/include/meson.build                   |   2 +-
 lib/eal/include/rte_eal_interrupts.h          | 269 -------
 lib/eal/include/rte_eal_trace.h               |  24 +-
 lib/eal/include/rte_epoll.h                   | 117 +++
 lib/eal/include/rte_interrupts.h              | 673 +++++++++++++++++-
 lib/eal/linux/eal.c                           |   1 +
 lib/eal/linux/eal_alarm.c                     |  39 +-
 lib/eal/linux/eal_dev.c                       |  65 +-
 lib/eal/linux/eal_interrupts.c                | 294 +++++---
 lib/eal/version.map                           |  30 +
 lib/ethdev/ethdev_pci.h                       |   2 +-
 lib/ethdev/rte_ethdev.c                       |  14 +-
 129 files changed, 3763 insertions(+), 1672 deletions(-)
 create mode 100644 lib/eal/common/eal_common_interrupts.c
 delete mode 100644 lib/eal/include/rte_eal_interrupts.h
 create mode 100644 lib/eal/include/rte_epoll.h

-- 
2.18.0


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [RFC 3/7] eal/interrupts: avoid direct access to interrupt handle
  2021-08-26 14:57  4% [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
@ 2021-08-26 14:57  1% ` Harman Kalra
  0 siblings, 0 replies; 200+ results
From: Harman Kalra @ 2021-08-26 14:57 UTC (permalink / raw)
  To: dev, Harman Kalra, Bruce Richardson

Making changes to the interrupt framework to use the interrupt handle
APIs to get/set any field. Direct access to any of the fields
should be avoided to prevent any ABI breakage in the future.

Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
 lib/eal/freebsd/eal_interrupts.c |  93 ++++++----
 lib/eal/linux/eal_interrupts.c   | 294 +++++++++++++++++++------------
 2 files changed, 241 insertions(+), 146 deletions(-)

diff --git a/lib/eal/freebsd/eal_interrupts.c b/lib/eal/freebsd/eal_interrupts.c
index 86810845fe..5724948d81 100644
--- a/lib/eal/freebsd/eal_interrupts.c
+++ b/lib/eal/freebsd/eal_interrupts.c
@@ -40,7 +40,7 @@ struct rte_intr_callback {
 
 struct rte_intr_source {
 	TAILQ_ENTRY(rte_intr_source) next;
-	struct rte_intr_handle intr_handle; /**< interrupt handle */
+	struct rte_intr_handle *intr_handle; /**< interrupt handle */
 	struct rte_intr_cb_list callbacks;  /**< user callbacks */
 	uint32_t active;
 };
@@ -60,7 +60,7 @@ static int
 intr_source_to_kevent(const struct rte_intr_handle *ih, struct kevent *ke)
 {
 	/* alarm callbacks are special case */
-	if (ih->type == RTE_INTR_HANDLE_ALARM) {
+	if (rte_intr_handle_type_get(ih) == RTE_INTR_HANDLE_ALARM) {
 		uint64_t timeout_ns;
 
 		/* get soonest alarm timeout */
@@ -89,7 +89,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	int ret = 0, add_event = 0;
 
 	/* first do parameter checking */
-	if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+	if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 ||
+	    cb == NULL) {
 		RTE_LOG(ERR, EAL,
 			"Registering with invalid input parameter\n");
 		return -EINVAL;
@@ -103,7 +104,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 
 	/* find the source for this intr_handle */
 	TAILQ_FOREACH(src, &intr_sources, next) {
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_handle_fd_get(src->intr_handle) ==
+		    rte_intr_handle_fd_get(intr_handle))
 			break;
 	}
 
@@ -112,8 +114,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	 * thing on the list should be eal_alarm_callback() and we may
 	 * be called just to reset the timer.
 	 */
-	if (src != NULL && src->intr_handle.type == RTE_INTR_HANDLE_ALARM &&
-		 !TAILQ_EMPTY(&src->callbacks)) {
+	if (src != NULL && rte_intr_handle_type_get(src->intr_handle) ==
+		RTE_INTR_HANDLE_ALARM && !TAILQ_EMPTY(&src->callbacks)) {
 		callback = NULL;
 	} else {
 		/* allocate a new interrupt callback entity */
@@ -135,10 +137,20 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 				ret = -ENOMEM;
 				goto fail;
 			} else {
-				src->intr_handle = *intr_handle;
-				TAILQ_INIT(&src->callbacks);
-				TAILQ_INSERT_TAIL(&intr_sources, src, next);
-			}
+				src->intr_handle =
+					rte_intr_handle_instance_alloc(
+					RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+				if (src->intr_handle == NULL) {
+					RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+					free(callback);
+					ret = -ENOMEM;
+				} else {
+					rte_intr_handle_instance_index_set(
+					      src->intr_handle, intr_handle, 0);
+					TAILQ_INIT(&src->callbacks);
+					TAILQ_INSERT_TAIL(&intr_sources, src,
+							  next);
+				}
 		}
 
 		/* we had no interrupts for this */
@@ -151,7 +163,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	/* add events to the queue. timer events are special as we need to
 	 * re-set the timer.
 	 */
-	if (add_event || src->intr_handle.type == RTE_INTR_HANDLE_ALARM) {
+	if (add_event || rte_intr_handle_type_get(src->intr_handle) ==
+							RTE_INTR_HANDLE_ALARM) {
 		struct kevent ke;
 
 		memset(&ke, 0, sizeof(ke));
@@ -173,12 +186,13 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 			 */
 			if (errno == ENODEV)
 				RTE_LOG(DEBUG, EAL, "Interrupt handle %d not supported\n",
-					src->intr_handle.fd);
+				rte_intr_handle_fd_get(src->intr_handle));
 			else
 				RTE_LOG(ERR, EAL, "Error adding fd %d "
-						"kevent, %s\n",
-						src->intr_handle.fd,
-						strerror(errno));
+					"kevent, %s\n",
+					rte_intr_handle_fd_get(
+							src->intr_handle),
+					strerror(errno));
 			ret = -errno;
 			goto fail;
 		}
@@ -213,7 +227,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -228,7 +242,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_handle_fd_get(src->intr_handle) ==
+					rte_intr_handle_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -268,7 +283,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -282,7 +297,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_handle_fd_get(src->intr_handle) ==
+					rte_intr_handle_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -314,7 +330,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 		 */
 		if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
 			RTE_LOG(ERR, EAL, "Error removing fd %d kevent, %s\n",
-				src->intr_handle.fd, strerror(errno));
+				rte_intr_handle_fd_get(src->intr_handle),
+				strerror(errno));
 			/* removing non-existent even is an expected condition
 			 * in some circumstances (e.g. oneshot events).
 			 */
@@ -365,17 +382,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	if (rte_intr_handle_fd_get(intr_handle) < 0 ||
+				rte_intr_handle_dev_fd_get(intr_handle) < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type) {
+	switch (rte_intr_handle_type_get(intr_handle)) {
 	/* not used at this moment */
 	case RTE_INTR_HANDLE_ALARM:
 		rc = -1;
@@ -388,7 +406,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_handle_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -406,17 +424,18 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	if (rte_intr_handle_fd_get(intr_handle) < 0 ||
+				rte_intr_handle_dev_fd_get(intr_handle) < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type) {
+	switch (rte_intr_handle_type_get(intr_handle)) {
 	/* not used at this moment */
 	case RTE_INTR_HANDLE_ALARM:
 		rc = -1;
@@ -429,7 +448,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_handle_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -441,7 +460,8 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	if (intr_handle &&
+	    rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
 		return 0;
 
 	return -1;
@@ -463,7 +483,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 
 		rte_spinlock_lock(&intr_lock);
 		TAILQ_FOREACH(src, &intr_sources, next)
-			if (src->intr_handle.fd == event_fd)
+			if (rte_intr_handle_fd_get(src->intr_handle) ==
+								event_fd)
 				break;
 		if (src == NULL) {
 			rte_spinlock_unlock(&intr_lock);
@@ -475,7 +496,7 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 		rte_spinlock_unlock(&intr_lock);
 
 		/* set the length to be read dor different handle type */
-		switch (src->intr_handle.type) {
+		switch (rte_intr_handle_type_get(src->intr_handle)) {
 		case RTE_INTR_HANDLE_ALARM:
 			bytes_read = 0;
 			call = true;
@@ -546,7 +567,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 				/* mark for deletion from the queue */
 				ke.flags = EV_DELETE;
 
-				if (intr_source_to_kevent(&src->intr_handle, &ke) < 0) {
+				if (intr_source_to_kevent(src->intr_handle,
+							  &ke) < 0) {
 					RTE_LOG(ERR, EAL, "Cannot convert to kevent\n");
 					rte_spinlock_unlock(&intr_lock);
 					return;
@@ -557,7 +579,9 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 				 */
 				if (kevent(kq, &ke, 1, NULL, 0, NULL) < 0) {
 					RTE_LOG(ERR, EAL, "Error removing fd %d kevent, "
-						"%s\n", src->intr_handle.fd,
+						"%s\n",
+						rte_intr_handle_fd_get(
+							src->intr_handle),
 						strerror(errno));
 					/* removing non-existent even is an expected
 					 * condition in some circumstances
@@ -567,7 +591,8 @@ eal_intr_process_interrupts(struct kevent *events, int nfds)
 
 				TAILQ_REMOVE(&src->callbacks, cb, next);
 				if (cb->ucb_fn)
-					cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+					cb->ucb_fn(src->intr_handle,
+						   cb->cb_arg);
 				free(cb);
 			}
 		}
diff --git a/lib/eal/linux/eal_interrupts.c b/lib/eal/linux/eal_interrupts.c
index 22b3b7bcd9..570eddf088 100644
--- a/lib/eal/linux/eal_interrupts.c
+++ b/lib/eal/linux/eal_interrupts.c
@@ -20,6 +20,7 @@
 #include <stdbool.h>
 
 #include <rte_common.h>
+#include <rte_epoll.h>
 #include <rte_interrupts.h>
 #include <rte_memory.h>
 #include <rte_launch.h>
@@ -82,7 +83,7 @@ struct rte_intr_callback {
 
 struct rte_intr_source {
 	TAILQ_ENTRY(rte_intr_source) next;
-	struct rte_intr_handle intr_handle; /**< interrupt handle */
+	struct rte_intr_handle *intr_handle; /**< interrupt handle */
 	struct rte_intr_cb_list callbacks;  /**< user callbacks */
 	uint32_t active;
 };
@@ -112,7 +113,7 @@ static int
 vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 	int *fd_ptr;
 
 	len = sizeof(irq_set_buf);
@@ -125,13 +126,14 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_handle_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_handle_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -144,11 +146,11 @@ vfio_enable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_handle_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -159,7 +161,7 @@ static int
 vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -171,11 +173,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error masking INTx interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_handle_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -187,11 +190,12 @@ vfio_disable_intx(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL,
-			"Error disabling INTx interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling INTx interrupts for fd %d\n",
+			rte_intr_handle_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -202,6 +206,7 @@ static int
 vfio_ack_intx(const struct rte_intr_handle *intr_handle)
 {
 	struct vfio_irq_set irq_set;
+	int vfio_dev_fd;
 
 	/* unmask INTx */
 	memset(&irq_set, 0, sizeof(irq_set));
@@ -211,9 +216,10 @@ vfio_ack_intx(const struct rte_intr_handle *intr_handle)
 	irq_set.index = VFIO_PCI_INTX_IRQ_INDEX;
 	irq_set.start = 0;
 
-	if (ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	if (ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, &irq_set)) {
 		RTE_LOG(ERR, EAL, "Error unmasking INTx interrupts for fd %d\n",
-			intr_handle->fd);
+			rte_intr_handle_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -225,7 +231,7 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
 	int len, ret;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd;
 
 	len = sizeof(irq_set_buf);
 
@@ -236,13 +242,14 @@ vfio_enable_msi(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_handle_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling MSI interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_handle_fd_get(intr_handle));
 		return -1;
 	}
 	return 0;
@@ -253,7 +260,7 @@ static int
 vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -264,11 +271,13 @@ vfio_disable_msi(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSI_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL,
-			"Error disabling MSI interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling MSI interrupts for fd %d\n",
+			rte_intr_handle_fd_get(intr_handle));
 
 	return ret;
 }
@@ -279,30 +288,34 @@ vfio_enable_msix(const struct rte_intr_handle *intr_handle) {
 	int len, ret;
 	char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd, i;
 
 	len = sizeof(irq_set_buf);
 
 	irq_set = (struct vfio_irq_set *) irq_set_buf;
 	irq_set->argsz = len;
 	/* 0 < irq_set->count < RTE_MAX_RXTX_INTR_VEC_ID + 1 */
-	irq_set->count = intr_handle->max_intr ?
-		(intr_handle->max_intr > RTE_MAX_RXTX_INTR_VEC_ID + 1 ?
-		RTE_MAX_RXTX_INTR_VEC_ID + 1 : intr_handle->max_intr) : 1;
+	irq_set->count = rte_intr_handle_max_intr_get(intr_handle) ?
+		(rte_intr_handle_max_intr_get(intr_handle) >
+		 RTE_MAX_RXTX_INTR_VEC_ID + 1 ?	RTE_MAX_RXTX_INTR_VEC_ID + 1 :
+		 rte_intr_handle_max_intr_get(intr_handle)) : 1;
+
 	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | VFIO_IRQ_SET_ACTION_TRIGGER;
 	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
 	/* INTR vector offset 0 reserve for non-efds mapping */
-	fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = intr_handle->fd;
-	memcpy(&fd_ptr[RTE_INTR_VEC_RXTX_OFFSET], intr_handle->efds,
-		sizeof(*intr_handle->efds) * intr_handle->nb_efd);
+	fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = rte_intr_handle_fd_get(intr_handle);
+	for (i = 0; i < rte_intr_handle_nb_efd_get(intr_handle); i++)
+		fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] =
+			rte_intr_handle_efds_index_get(intr_handle, i);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling MSI-X interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_handle_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -314,7 +327,7 @@ static int
 vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -325,11 +338,13 @@ vfio_disable_msix(const struct rte_intr_handle *intr_handle) {
 	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL,
-			"Error disabling MSI-X interrupts for fd %d\n", intr_handle->fd);
+			"Error disabling MSI-X interrupts for fd %d\n",
+			rte_intr_handle_fd_get(intr_handle));
 
 	return ret;
 }
@@ -342,7 +357,7 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
 	int len, ret;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
 	struct vfio_irq_set *irq_set;
-	int *fd_ptr;
+	int *fd_ptr, vfio_dev_fd;
 
 	len = sizeof(irq_set_buf);
 
@@ -354,13 +369,14 @@ vfio_enable_req(const struct rte_intr_handle *intr_handle)
 	irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
 	irq_set->start = 0;
 	fd_ptr = (int *) &irq_set->data;
-	*fd_ptr = intr_handle->fd;
+	*fd_ptr = rte_intr_handle_fd_get(intr_handle);
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret) {
 		RTE_LOG(ERR, EAL, "Error enabling req interrupts for fd %d\n",
-						intr_handle->fd);
+					rte_intr_handle_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -373,7 +389,7 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
 {
 	struct vfio_irq_set *irq_set;
 	char irq_set_buf[IRQ_SET_BUF_LEN];
-	int len, ret;
+	int len, ret, vfio_dev_fd;
 
 	len = sizeof(struct vfio_irq_set);
 
@@ -384,11 +400,12 @@ vfio_disable_req(const struct rte_intr_handle *intr_handle)
 	irq_set->index = VFIO_PCI_REQ_IRQ_INDEX;
 	irq_set->start = 0;
 
-	ret = ioctl(intr_handle->vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	vfio_dev_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
 
 	if (ret)
 		RTE_LOG(ERR, EAL, "Error disabling req interrupts for fd %d\n",
-			intr_handle->fd);
+			rte_intr_handle_fd_get(intr_handle));
 
 	return ret;
 }
@@ -399,20 +416,22 @@ static int
 uio_intx_intr_disable(const struct rte_intr_handle *intr_handle)
 {
 	unsigned char command_high;
+	int uio_cfg_fd;
 
 	/* use UIO config file descriptor for uio_pci_generic */
-	if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error reading interrupts status for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 	/* disable interrupts */
 	command_high |= 0x4;
-	if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error disabling interrupts for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 
@@ -423,20 +442,22 @@ static int
 uio_intx_intr_enable(const struct rte_intr_handle *intr_handle)
 {
 	unsigned char command_high;
+	int uio_cfg_fd;
 
 	/* use UIO config file descriptor for uio_pci_generic */
-	if (pread(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	if (pread(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error reading interrupts status for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 	/* enable interrupts */
 	command_high &= ~0x4;
-	if (pwrite(intr_handle->uio_cfg_fd, &command_high, 1, 5) != 1) {
+	if (pwrite(uio_cfg_fd, &command_high, 1, 5) != 1) {
 		RTE_LOG(ERR, EAL,
 			"Error enabling interrupts for fd %d\n",
-			intr_handle->uio_cfg_fd);
+			uio_cfg_fd);
 		return -1;
 	}
 
@@ -448,10 +469,11 @@ uio_intr_disable(const struct rte_intr_handle *intr_handle)
 {
 	const int value = 0;
 
-	if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+	if (write(rte_intr_handle_fd_get(intr_handle), &value,
+		  sizeof(value)) < 0) {
 		RTE_LOG(ERR, EAL,
 			"Error disabling interrupts for fd %d (%s)\n",
-			intr_handle->fd, strerror(errno));
+			rte_intr_handle_fd_get(intr_handle), strerror(errno));
 		return -1;
 	}
 	return 0;
@@ -462,10 +484,11 @@ uio_intr_enable(const struct rte_intr_handle *intr_handle)
 {
 	const int value = 1;
 
-	if (write(intr_handle->fd, &value, sizeof(value)) < 0) {
+	if (write(rte_intr_handle_fd_get(intr_handle), &value,
+		  sizeof(value)) < 0) {
 		RTE_LOG(ERR, EAL,
 			"Error enabling interrupts for fd %d (%s)\n",
-			intr_handle->fd, strerror(errno));
+			rte_intr_handle_fd_get(intr_handle), strerror(errno));
 		return -1;
 	}
 	return 0;
@@ -482,7 +505,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 	wake_thread = 0;
 
 	/* first do parameter checking */
-	if (intr_handle == NULL || intr_handle->fd < 0 || cb == NULL) {
+	if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0 ||
+	    cb == NULL) {
 		RTE_LOG(ERR, EAL,
 			"Registering with invalid input parameter\n");
 		return -EINVAL;
@@ -503,7 +527,8 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 
 	/* check if there is at least one callback registered for the fd */
 	TAILQ_FOREACH(src, &intr_sources, next) {
-		if (src->intr_handle.fd == intr_handle->fd) {
+		if (rte_intr_handle_fd_get(src->intr_handle) ==
+					rte_intr_handle_fd_get(intr_handle)) {
 			/* we had no interrupts for this */
 			if (TAILQ_EMPTY(&src->callbacks))
 				wake_thread = 1;
@@ -522,12 +547,22 @@ rte_intr_callback_register(const struct rte_intr_handle *intr_handle,
 			free(callback);
 			ret = -ENOMEM;
 		} else {
-			src->intr_handle = *intr_handle;
-			TAILQ_INIT(&src->callbacks);
-			TAILQ_INSERT_TAIL(&(src->callbacks), callback, next);
-			TAILQ_INSERT_TAIL(&intr_sources, src, next);
-			wake_thread = 1;
-			ret = 0;
+			src->intr_handle = rte_intr_handle_instance_alloc(
+					RTE_INTR_HANDLE_DEFAULT_SIZE, false);
+			if (src->intr_handle == NULL) {
+				RTE_LOG(ERR, EAL, "Can not create intr instance\n");
+				free(callback);
+				ret = -ENOMEM;
+			} else {
+				rte_intr_handle_instance_index_set(
+					src->intr_handle, intr_handle, 0);
+				TAILQ_INIT(&src->callbacks);
+				TAILQ_INSERT_TAIL(&(src->callbacks), callback,
+						  next);
+				TAILQ_INSERT_TAIL(&intr_sources, src, next);
+				wake_thread = 1;
+				ret = 0;
+			}
 		}
 	}
 
@@ -555,7 +590,7 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -565,7 +600,8 @@ rte_intr_callback_unregister_pending(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_handle_fd_get(src->intr_handle) ==
+					rte_intr_handle_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -605,7 +641,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 	struct rte_intr_callback *cb, *next;
 
 	/* do parameter checking first */
-	if (intr_handle == NULL || intr_handle->fd < 0) {
+	if (intr_handle == NULL || rte_intr_handle_fd_get(intr_handle) < 0) {
 		RTE_LOG(ERR, EAL,
 		"Unregistering with invalid input parameter\n");
 		return -EINVAL;
@@ -615,7 +651,8 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 
 	/* check if the insterrupt source for the fd is existent */
 	TAILQ_FOREACH(src, &intr_sources, next)
-		if (src->intr_handle.fd == intr_handle->fd)
+		if (rte_intr_handle_fd_get(src->intr_handle) ==
+					rte_intr_handle_fd_get(intr_handle))
 			break;
 
 	/* No interrupt source registered for the fd */
@@ -646,6 +683,7 @@ rte_intr_callback_unregister(const struct rte_intr_handle *intr_handle,
 		/* all callbacks for that source are removed. */
 		if (TAILQ_EMPTY(&src->callbacks)) {
 			TAILQ_REMOVE(&intr_sources, src, next);
+			rte_intr_handle_instance_free(src->intr_handle);
 			free(src);
 		}
 	}
@@ -677,22 +715,23 @@ rte_intr_callback_unregister_sync(const struct rte_intr_handle *intr_handle,
 int
 rte_intr_enable(const struct rte_intr_handle *intr_handle)
 {
-	int rc = 0;
+	int rc = 0, uio_cfg_fd;
 
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type){
+	switch (rte_intr_handle_type_get(intr_handle)) {
 	/* write to the uio fd to enable the interrupt */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_enable(intr_handle))
@@ -734,7 +773,7 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_handle_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -757,13 +796,18 @@ rte_intr_enable(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_ack(const struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle && intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	int uio_cfg_fd;
+
+	if (intr_handle && rte_intr_handle_type_get(intr_handle) ==
+							RTE_INTR_HANDLE_VDEV)
 		return 0;
 
-	if (!intr_handle || intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0)
+	uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	if (!intr_handle || rte_intr_handle_fd_get(intr_handle) < 0 ||
+								uio_cfg_fd < 0)
 		return -1;
 
-	switch (intr_handle->type) {
+	switch (rte_intr_handle_type_get(intr_handle)) {
 	/* Both acking and enabling are same for UIO */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_enable(intr_handle))
@@ -796,7 +840,7 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
 	/* unknown handle type */
 	default:
 		RTE_LOG(ERR, EAL, "Unknown handle type of fd %d\n",
-			intr_handle->fd);
+			rte_intr_handle_fd_get(intr_handle));
 		return -1;
 	}
 
@@ -806,22 +850,23 @@ rte_intr_ack(const struct rte_intr_handle *intr_handle)
 int
 rte_intr_disable(const struct rte_intr_handle *intr_handle)
 {
-	int rc = 0;
+	int rc = 0, uio_cfg_fd;
 
 	if (intr_handle == NULL)
 		return -1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+	if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV) {
 		rc = 0;
 		goto out;
 	}
 
-	if (intr_handle->fd < 0 || intr_handle->uio_cfg_fd < 0) {
+	uio_cfg_fd = rte_intr_handle_dev_fd_get(intr_handle);
+	if (rte_intr_handle_fd_get(intr_handle) < 0 || uio_cfg_fd < 0) {
 		rc = -1;
 		goto out;
 	}
 
-	switch (intr_handle->type){
+	switch (rte_intr_handle_type_get(intr_handle)) {
 	/* write to the uio fd to disable the interrupt */
 	case RTE_INTR_HANDLE_UIO:
 		if (uio_intr_disable(intr_handle))
@@ -863,7 +908,7 @@ rte_intr_disable(const struct rte_intr_handle *intr_handle)
 	default:
 		RTE_LOG(ERR, EAL,
 			"Unknown handle type of fd %d\n",
-					intr_handle->fd);
+					rte_intr_handle_fd_get(intr_handle));
 		rc = -1;
 		break;
 	}
@@ -896,7 +941,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		}
 		rte_spinlock_lock(&intr_lock);
 		TAILQ_FOREACH(src, &intr_sources, next)
-			if (src->intr_handle.fd ==
+			if (rte_intr_handle_fd_get(src->intr_handle) ==
 					events[n].data.fd)
 				break;
 		if (src == NULL){
@@ -909,7 +954,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		rte_spinlock_unlock(&intr_lock);
 
 		/* set the length to be read dor different handle type */
-		switch (src->intr_handle.type) {
+		switch (rte_intr_handle_type_get(src->intr_handle)) {
 		case RTE_INTR_HANDLE_UIO:
 		case RTE_INTR_HANDLE_UIO_INTX:
 			bytes_read = sizeof(buf.uio_intr_count);
@@ -973,6 +1018,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 					TAILQ_REMOVE(&src->callbacks, cb, next);
 					free(cb);
 				}
+				rte_intr_handle_instance_free(src->intr_handle);
 				free(src);
 				return -1;
 			} else if (bytes_read == 0)
@@ -1012,7 +1058,8 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 			if (cb->pending_delete) {
 				TAILQ_REMOVE(&src->callbacks, cb, next);
 				if (cb->ucb_fn)
-					cb->ucb_fn(&src->intr_handle, cb->cb_arg);
+					cb->ucb_fn(src->intr_handle,
+						   cb->cb_arg);
 				free(cb);
 				rv++;
 			}
@@ -1021,6 +1068,7 @@ eal_intr_process_interrupts(struct epoll_event *events, int nfds)
 		/* all callbacks for that source are removed. */
 		if (TAILQ_EMPTY(&src->callbacks)) {
 			TAILQ_REMOVE(&intr_sources, src, next);
+			rte_intr_handle_instance_free(src->intr_handle);
 			free(src);
 		}
 
@@ -1123,16 +1171,18 @@ eal_intr_thread_main(__rte_unused void *arg)
 				continue; /* skip those with no callbacks */
 			memset(&ev, 0, sizeof(ev));
 			ev.events = EPOLLIN | EPOLLPRI | EPOLLRDHUP | EPOLLHUP;
-			ev.data.fd = src->intr_handle.fd;
+			ev.data.fd = rte_intr_handle_fd_get(src->intr_handle);
 
 			/**
 			 * add all the uio device file descriptor
 			 * into wait list.
 			 */
 			if (epoll_ctl(pfd, EPOLL_CTL_ADD,
-					src->intr_handle.fd, &ev) < 0){
+				rte_intr_handle_fd_get(src->intr_handle),
+								&ev) < 0) {
 				rte_panic("Error adding fd %d epoll_ctl, %s\n",
-					src->intr_handle.fd, strerror(errno));
+				rte_intr_handle_fd_get(src->intr_handle),
+				strerror(errno));
 			}
 			else
 				numfds++;
@@ -1185,7 +1235,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
 	int bytes_read = 0;
 	int nbytes;
 
-	switch (intr_handle->type) {
+	switch (rte_intr_handle_type_get(intr_handle)) {
 	case RTE_INTR_HANDLE_UIO:
 	case RTE_INTR_HANDLE_UIO_INTX:
 		bytes_read = sizeof(buf.uio_intr_count);
@@ -1198,7 +1248,7 @@ eal_intr_proc_rxtx_intr(int fd, const struct rte_intr_handle *intr_handle)
 		break;
 #endif
 	case RTE_INTR_HANDLE_VDEV:
-		bytes_read = intr_handle->efd_counter_size;
+		bytes_read = rte_intr_handle_efd_counter_size_get(intr_handle);
 		/* For vdev, number of bytes to read is set by driver */
 		break;
 	case RTE_INTR_HANDLE_EXT:
@@ -1419,8 +1469,8 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 	efd_idx = (vec >= RTE_INTR_VEC_RXTX_OFFSET) ?
 		(vec - RTE_INTR_VEC_RXTX_OFFSET) : vec;
 
-	if (!intr_handle || intr_handle->nb_efd == 0 ||
-	    efd_idx >= intr_handle->nb_efd) {
+	if (!intr_handle || rte_intr_handle_nb_efd_get(intr_handle) == 0 ||
+	    efd_idx >= (unsigned int)rte_intr_handle_nb_efd_get(intr_handle)) {
 		RTE_LOG(ERR, EAL, "Wrong intr vector number.\n");
 		return -EPERM;
 	}
@@ -1428,7 +1478,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 	switch (op) {
 	case RTE_INTR_EVENT_ADD:
 		epfd_op = EPOLL_CTL_ADD;
-		rev = &intr_handle->elist[efd_idx];
+		rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) != RTE_EPOLL_INVALID) {
 			RTE_LOG(INFO, EAL, "Event already been added.\n");
@@ -1442,7 +1492,9 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 		epdata->cb_fun = (rte_intr_event_cb_t)eal_intr_proc_rxtx_intr;
 		epdata->cb_arg = (void *)intr_handle;
 		rc = rte_epoll_ctl(epfd, epfd_op,
-				   intr_handle->efds[efd_idx], rev);
+				   rte_intr_handle_efds_index_get(intr_handle,
+								  efd_idx),
+				   rev);
 		if (!rc)
 			RTE_LOG(DEBUG, EAL,
 				"efd %d associated with vec %d added on epfd %d"
@@ -1452,7 +1504,7 @@ rte_intr_rx_ctl(struct rte_intr_handle *intr_handle, int epfd,
 		break;
 	case RTE_INTR_EVENT_DEL:
 		epfd_op = EPOLL_CTL_DEL;
-		rev = &intr_handle->elist[efd_idx];
+		rev = rte_intr_handle_elist_index_get(intr_handle, efd_idx);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) == RTE_EPOLL_INVALID) {
 			RTE_LOG(INFO, EAL, "Event does not exist.\n");
@@ -1477,8 +1529,9 @@ rte_intr_free_epoll_fd(struct rte_intr_handle *intr_handle)
 	uint32_t i;
 	struct rte_epoll_event *rev;
 
-	for (i = 0; i < intr_handle->nb_efd; i++) {
-		rev = &intr_handle->elist[i];
+	for (i = 0; i < (uint32_t)rte_intr_handle_nb_efd_get(intr_handle);
+									i++) {
+		rev = rte_intr_handle_elist_index_get(intr_handle, i);
 		if (__atomic_load_n(&rev->status,
 				__ATOMIC_RELAXED) == RTE_EPOLL_INVALID)
 			continue;
@@ -1498,7 +1551,8 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 
 	assert(nb_efd != 0);
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX) {
+	if (rte_intr_handle_type_get(intr_handle) ==
+						RTE_INTR_HANDLE_VFIO_MSIX) {
 		for (i = 0; i < n; i++) {
 			fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
 			if (fd < 0) {
@@ -1507,21 +1561,34 @@ rte_intr_efd_enable(struct rte_intr_handle *intr_handle, uint32_t nb_efd)
 					errno, strerror(errno));
 				return -errno;
 			}
-			intr_handle->efds[i] = fd;
+
+			if (rte_intr_handle_efds_index_set(intr_handle, i, fd))
+				return -rte_errno;
 		}
-		intr_handle->nb_efd   = n;
-		intr_handle->max_intr = NB_OTHER_INTR + n;
-	} else if (intr_handle->type == RTE_INTR_HANDLE_VDEV) {
+
+		if (rte_intr_handle_nb_efd_set(intr_handle, n))
+			return -rte_errno;
+
+		if (rte_intr_handle_max_intr_set(intr_handle,
+						 NB_OTHER_INTR + n))
+			return -rte_errno;
+	} else if (rte_intr_handle_type_get(intr_handle) ==
+							RTE_INTR_HANDLE_VDEV) {
 		/* only check, initialization would be done in vdev driver.*/
-		if (intr_handle->efd_counter_size >
-		    sizeof(union rte_intr_read_buffer)) {
+		if ((uint64_t)rte_intr_handle_efd_counter_size_get(intr_handle)
+		    > sizeof(union rte_intr_read_buffer)) {
 			RTE_LOG(ERR, EAL, "the efd_counter_size is oversized");
 			return -EINVAL;
 		}
 	} else {
-		intr_handle->efds[0]  = intr_handle->fd;
-		intr_handle->nb_efd   = RTE_MIN(nb_efd, 1U);
-		intr_handle->max_intr = NB_OTHER_INTR;
+		if (rte_intr_handle_efds_index_set(intr_handle, 0,
+					   rte_intr_handle_fd_get(intr_handle)))
+			return -rte_errno;
+		if (rte_intr_handle_nb_efd_set(intr_handle,
+					       RTE_MIN(nb_efd, 1U)))
+			return -rte_errno;
+		if (rte_intr_handle_max_intr_set(intr_handle, NB_OTHER_INTR))
+			return -rte_errno;
 	}
 
 	return 0;
@@ -1533,18 +1600,20 @@ rte_intr_efd_disable(struct rte_intr_handle *intr_handle)
 	uint32_t i;
 
 	rte_intr_free_epoll_fd(intr_handle);
-	if (intr_handle->max_intr > intr_handle->nb_efd) {
-		for (i = 0; i < intr_handle->nb_efd; i++)
-			close(intr_handle->efds[i]);
+	if (rte_intr_handle_max_intr_get(intr_handle) >
+				rte_intr_handle_nb_efd_get(intr_handle)) {
+		for (i = 0; i <
+			(uint32_t)rte_intr_handle_nb_efd_get(intr_handle); i++)
+			close(rte_intr_handle_efds_index_get(intr_handle, i));
 	}
-	intr_handle->nb_efd = 0;
-	intr_handle->max_intr = 0;
+	rte_intr_handle_nb_efd_set(intr_handle, 0);
+	rte_intr_handle_max_intr_set(intr_handle, 0);
 }
 
 int
 rte_intr_dp_is_en(struct rte_intr_handle *intr_handle)
 {
-	return !(!intr_handle->nb_efd);
+	return !(!rte_intr_handle_nb_efd_get(intr_handle));
 }
 
 int
@@ -1553,16 +1622,17 @@ rte_intr_allow_others(struct rte_intr_handle *intr_handle)
 	if (!rte_intr_dp_is_en(intr_handle))
 		return 1;
 	else
-		return !!(intr_handle->max_intr - intr_handle->nb_efd);
+		return !!(rte_intr_handle_max_intr_get(intr_handle) -
+				rte_intr_handle_nb_efd_get(intr_handle));
 }
 
 int
 rte_intr_cap_multiple(struct rte_intr_handle *intr_handle)
 {
-	if (intr_handle->type == RTE_INTR_HANDLE_VFIO_MSIX)
+	if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VFIO_MSIX)
 		return 1;
 
-	if (intr_handle->type == RTE_INTR_HANDLE_VDEV)
+	if (rte_intr_handle_type_get(intr_handle) == RTE_INTR_HANDLE_VDEV)
 		return 1;
 
 	return 0;
-- 
2.18.0


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [EXT] [PATCH] doc: announce library refactor for ABI improvement
  2021-08-26 11:04  4%   ` Bruce Richardson
@ 2021-08-26 15:44  4%     ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-08-26 15:44 UTC (permalink / raw)
  To: Bruce Richardson, Akhil Goyal
  Cc: Ferruh Yigit, Ray Kinsella, dev, Konstantin Ananyev,
	Jerin Jacob Kollanukkaran

On 8/26/21 2:04 PM, Bruce Richardson wrote:
> On Thu, Aug 26, 2021 at 10:46:35AM +0000, Akhil Goyal wrote:
>>> Target is to reduce the public interface surface to improve the ABI
>>> stability and this is preparation for the longer term stable ABI
>>> support.
>>>
>>> Mainly device abstraction layer libraries are impacted because they have
>>> two interfaces, one is public interface to the applications and other is
>>> internal interface to the drivers. Some driver/internal interface
>>> structures/symbols are in the public interface by mistake, this work is
>>> to clean them.
>>> Also some libraries have 'static inline' functions for performance
>>> reasons (like ones in the ethdev), this work plans to split the
>>> structures and hide the part that is not used by inline functions.
>>>
>>> The need of the work for the stable ABI already discussed and planned by
>>> the DPDK technical board:
>>> https://mails.dpdk.org/archives/dev/2021-July/214662.html
>>>
>>> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
>>> ---
>> Acked-by: Akhil Goyal <gakhil@marvell.com>
> 
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> 

Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

^ permalink raw reply	[relevance 4%]
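
The "split the structures" idea in the refactor note quoted above can be
read as the usual fast-path split. A minimal illustration, assuming opaque
internals and made-up names (this is not code from any patch in this
thread):

    #include <stdint.h>

    struct dev_internal;                   /* hidden inside the library */

    /* Public part: only the fields the static inline fast path touches. */
    struct dev_fast_path {
            uint16_t (*rx_burst)(void *rxq, void **pkts, uint16_t nb);
            void *rxq;
            struct dev_internal *priv;     /* everything else stays opaque */
    };

    /* The inline helper keeps its performance while the driver-facing
     * fields drop out of the public ABI. */
    static inline uint16_t
    dev_rx_burst(struct dev_fast_path *dev, void **pkts, uint16_t nb)
    {
            return dev->rx_burst(dev->rxq, pkts, nb);
    }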

* [dpdk-dev] [PATCH v2] ethdev: add namespace
  @ 2021-08-27  1:19  1% ` Ferruh Yigit
  2021-08-30 17:19  1%   ` [dpdk-dev] [PATCH v3] " Ferruh Yigit
  0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-08-27  1:19 UTC (permalink / raw)
  To: Maryam Tahhan, Reshma Pattan, Jerin Jacob, Wisam Jaddo,
	Cristian Dumitrescu, Xiaoyun Li, Thomas Monjalon,
	Andrew Rybchenko, Jay Jayatheerthan, Chas Williams,
	Min Hu (Connor),
	Pavan Nikhilesh, Shijith Thotton, Ajit Khaparde, Somnath Kotur,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, Beilei Xing,
	Haiyue Wang, Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko,
	Keith Wiles, Jiayu Hu, Olivier Matz, Ori Kam, Akhil Goyal,
	Declan Doherty, Ray Kinsella, Radu Nicolau, Hemant Agrawal,
	Sachin Saxena, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao, John W. Linville, Ciara Loftus,
	Shepard Siegel, Ed Czeck, John Miller, Igor Russkikh,
	Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
	Shahed Shaikh, Bruce Richardson, Konstantin Ananyev,
	Ruifeng Wang, Rahul Lakkireddy, Marcin Wojtas, Michal Krawczyk,
	Shai Brandes, Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh,
	Gaetan Rivet, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
	Yisen Zhuang, Lijun Ou, Jingjing Wu, Qiming Yang, Andrew Boyer,
	Rosen Xu, Srisivasubramanian Srinivasan, Jakub Grajciar,
	Zyta Szpak, Liron Himi, Stephen Hemminger, Long Li,
	Martin Spinler, Heinrich Kuhn, Jiawen Wu, Tetsuya Mukawa,
	Harman Kalra, Anoob Joseph, Nalla Pradeep,
	Radha Mohan Chintakuntla, Veerasenareddy Burru,
	Devendra Singh Rawat, Jasvinder Singh, Maciej Czekaj, Jian Wang,
	Maxime Coquelin, Chenbo Xia, Yong Wang, Nicolas Chautru,
	David Hunt, Harry van Haaren, Bernard Iremonger, Anatoly Burakov,
	John McNamara, Kirill Rybalchenko, Byron Marohn, Yipeng Wang
  Cc: Ferruh Yigit, dev, Tyler Retzlaff

Add 'RTE_ETH' namespace to all enums & macros in a backward-compatible
way. The macros kept for backward compatibility can be removed in the next
LTS.

Internal components are switched to the new enum & macro names.
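
As an illustration only (not part of the patch), a minimal sketch of the
backward-compatible layer this implies, assuming the old names are kept as
plain aliases of the new RTE_ETH_* ones; the value below is made up:

    #include <stdio.h>

    /* New namespaced name; the value is illustrative, not the one used in
     * rte_ethdev.h. */
    #define RTE_ETH_RSS_IP  0x4

    /* Backward-compatibility alias, removable in a later LTS. */
    #define ETH_RSS_IP      RTE_ETH_RSS_IP

    int main(void)
    {
            /* Old and new spellings stay interchangeable for applications. */
            printf("%d\n", ETH_RSS_IP == RTE_ETH_RSS_IP);
            return 0;
    }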

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-By: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
v2:
* Updated internal components
* Removed deprecation notice
---
 app/proc-info/main.c                          |   8 +-
 app/test-eventdev/test_perf_common.c          |   4 +-
 app/test-eventdev/test_pipeline_common.c      |  12 +-
 app/test-flow-perf/config.h                   |   2 +-
 app/test-pipeline/init.c                      |   8 +-
 app/test-pmd/cmdline.c                        | 290 +++---
 app/test-pmd/config.c                         | 198 ++--
 app/test-pmd/csumonly.c                       |  28 +-
 app/test-pmd/flowgen.c                        |   6 +-
 app/test-pmd/macfwd.c                         |   6 +-
 app/test-pmd/macswap_common.h                 |   6 +-
 app/test-pmd/parameters.c                     |  54 +-
 app/test-pmd/testpmd.c                        |  60 +-
 app/test-pmd/testpmd.h                        |   2 +-
 app/test-pmd/txonly.c                         |   6 +-
 app/test/test_ethdev_link.c                   |  68 +-
 app/test/test_event_eth_rx_adapter.c          |   4 +-
 app/test/test_kni.c                           |   2 +-
 app/test/test_link_bonding.c                  |   4 +-
 app/test/test_link_bonding_mode4.c            |   4 +-
 app/test/test_link_bonding_rssconf.c          |  10 +-
 app/test/test_pmd_perf.c                      |  12 +-
 app/test/virtual_pmd.c                        |  10 +-
 doc/guides/eventdevs/cnxk.rst                 |   2 +-
 doc/guides/eventdevs/octeontx2.rst            |   2 +-
 doc/guides/howto/debug_troubleshoot.rst       |   2 +-
 doc/guides/nics/bnxt.rst                      |  26 +-
 doc/guides/nics/enic.rst                      |   2 +-
 doc/guides/nics/features.rst                  | 116 +--
 doc/guides/nics/fm10k.rst                     |   6 +-
 doc/guides/nics/intel_vf.rst                  |  10 +-
 doc/guides/nics/ixgbe.rst                     |  12 +-
 doc/guides/nics/mlx5.rst                      |   4 +-
 doc/guides/nics/tap.rst                       |   2 +-
 .../generic_segmentation_offload_lib.rst      |   8 +-
 doc/guides/prog_guide/mbuf_lib.rst            |  18 +-
 doc/guides/prog_guide/poll_mode_drv.rst       |   8 +-
 doc/guides/prog_guide/rte_flow.rst            |  34 +-
 doc/guides/prog_guide/rte_security.rst        |   2 +-
 doc/guides/rel_notes/deprecation.rst          |  12 +-
 doc/guides/sample_app_ug/ipsec_secgw.rst      |   4 +-
 doc/guides/testpmd_app_ug/run_app.rst         |   2 +-
 drivers/bus/dpaa/include/process.h            |  16 +-
 drivers/common/cnxk/roc_npc.h                 |   2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |  16 +-
 drivers/net/af_xdp/rte_eth_af_xdp.c           |  12 +-
 drivers/net/ark/ark_ethdev.c                  |  16 +-
 drivers/net/atlantic/atl_ethdev.c             |  90 +-
 drivers/net/atlantic/atl_ethdev.h             |  18 +-
 drivers/net/atlantic/atl_rxtx.c               |   6 +-
 drivers/net/avp/avp_ethdev.c                  |  26 +-
 drivers/net/axgbe/axgbe_dev.c                 |   6 +-
 drivers/net/axgbe/axgbe_ethdev.c              | 102 +-
 drivers/net/axgbe/axgbe_ethdev.h              |  12 +-
 drivers/net/axgbe/axgbe_mdio.c                |   2 +-
 drivers/net/axgbe/axgbe_rxtx.c                |   6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |  16 +-
 drivers/net/bnxt/bnxt.h                       |  68 +-
 drivers/net/bnxt/bnxt_ethdev.c                | 170 ++--
 drivers/net/bnxt/bnxt_flow.c                  |   4 +-
 drivers/net/bnxt/bnxt_hwrm.c                  | 112 +--
 drivers/net/bnxt/bnxt_reps.c                  |   2 +-
 drivers/net/bnxt/bnxt_ring.c                  |   4 +-
 drivers/net/bnxt/bnxt_rxq.c                   |  28 +-
 drivers/net/bnxt/bnxt_rxr.c                   |   4 +-
 drivers/net/bnxt/bnxt_rxtx_vec_avx2.c         |   2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_common.h       |   2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_neon.c         |   2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c          |   2 +-
 drivers/net/bnxt/bnxt_txr.c                   |   4 +-
 drivers/net/bnxt/bnxt_vnic.c                  |  30 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |   8 +-
 drivers/net/bonding/eth_bond_private.h        |   2 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |  16 +-
 drivers/net/bonding/rte_eth_bond_api.c        |   6 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |  42 +-
 drivers/net/cnxk/cn10k_ethdev.c               |  38 +-
 drivers/net/cnxk/cn10k_rx.c                   |   4 +-
 drivers/net/cnxk/cn10k_tx.c                   |   4 +-
 drivers/net/cnxk/cn9k_ethdev.c                |  56 +-
 drivers/net/cnxk/cn9k_rx.c                    |   4 +-
 drivers/net/cnxk/cn9k_tx.c                    |   4 +-
 drivers/net/cnxk/cnxk_ethdev.c                |  84 +-
 drivers/net/cnxk/cnxk_ethdev.h                |  49 +-
 drivers/net/cnxk/cnxk_ethdev_devargs.c        |   6 +-
 drivers/net/cnxk/cnxk_ethdev_ops.c            | 104 +-
 drivers/net/cnxk/cnxk_link.c                  |  14 +-
 drivers/net/cnxk/cnxk_ptp.c                   |   4 +-
 drivers/net/cnxk/cnxk_rte_flow.c              |   2 +-
 drivers/net/cxgbe/cxgbe.h                     |  48 +-
 drivers/net/cxgbe/cxgbe_ethdev.c              |  42 +-
 drivers/net/cxgbe/cxgbe_main.c                |  12 +-
 drivers/net/cxgbe/sge.c                       |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                | 190 ++--
 drivers/net/dpaa/dpaa_ethdev.h                |  10 +-
 drivers/net/dpaa/dpaa_flow.c                  |  32 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |  34 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              | 148 +--
 drivers/net/dpaa2/dpaa2_ethdev.h              |  12 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |   8 +-
 drivers/net/e1000/e1000_ethdev.h              |  18 +-
 drivers/net/e1000/em_ethdev.c                 |  68 +-
 drivers/net/e1000/em_rxtx.c                   |  48 +-
 drivers/net/e1000/igb_ethdev.c                | 158 +--
 drivers/net/e1000/igb_pf.c                    |   2 +-
 drivers/net/e1000/igb_rxtx.c                  | 116 +--
 drivers/net/ena/ena_ethdev.c                  |  70 +-
 drivers/net/ena/ena_ethdev.h                  |   4 +-
 drivers/net/ena/ena_rss.c                     |  66 +-
 drivers/net/enetc/enetc_ethdev.c              |  38 +-
 drivers/net/enic/enic_ethdev.c                |  80 +-
 drivers/net/enic/enic_main.c                  |  40 +-
 drivers/net/enic/enic_res.c                   |  52 +-
 drivers/net/failsafe/failsafe.c               |   8 +-
 drivers/net/failsafe/failsafe_intr.c          |   4 +-
 drivers/net/failsafe/failsafe_ops.c           |  82 +-
 drivers/net/fm10k/fm10k.h                     |   4 +-
 drivers/net/fm10k/fm10k_ethdev.c              | 140 +--
 drivers/net/fm10k/fm10k_rxtx_vec.c            |   6 +-
 drivers/net/hinic/base/hinic_pmd_hwdev.c      |  22 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          | 134 +--
 drivers/net/hinic/hinic_pmd_rx.c              |  36 +-
 drivers/net/hinic/hinic_pmd_rx.h              |  22 +-
 drivers/net/hns3/hns3_dcb.c                   |  14 +-
 drivers/net/hns3/hns3_ethdev.c                | 360 +++----
 drivers/net/hns3/hns3_ethdev.h                |  12 +-
 drivers/net/hns3/hns3_ethdev_vf.c             | 108 +--
 drivers/net/hns3/hns3_flow.c                  |   6 +-
 drivers/net/hns3/hns3_ptp.c                   |   2 +-
 drivers/net/hns3/hns3_rss.c                   | 100 +-
 drivers/net/hns3/hns3_rss.h                   |  28 +-
 drivers/net/hns3/hns3_rxtx.c                  |  30 +-
 drivers/net/hns3/hns3_rxtx.h                  |   2 +-
 drivers/net/hns3/hns3_rxtx_vec.c              |  10 +-
 drivers/net/i40e/i40e_ethdev.c                | 270 +++---
 drivers/net/i40e/i40e_ethdev.h                |  24 +-
 drivers/net/i40e/i40e_ethdev_vf.c             | 110 +--
 drivers/net/i40e/i40e_flow.c                  |   2 +-
 drivers/net/i40e/i40e_hash.c                  | 156 +--
 drivers/net/i40e/i40e_pf.c                    |  14 +-
 drivers/net/i40e/i40e_rxtx.c                  |  10 +-
 drivers/net/i40e/i40e_rxtx.h                  |   4 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |   2 +-
 drivers/net/i40e/i40e_rxtx_vec_common.h       |   8 +-
 drivers/net/i40e/i40e_vf_representor.c        |  48 +-
 drivers/net/iavf/iavf.h                       |  24 +-
 drivers/net/iavf/iavf_ethdev.c                | 178 ++--
 drivers/net/iavf/iavf_hash.c                  | 300 +++---
 drivers/net/iavf/iavf_rxtx.c                  |   2 +-
 drivers/net/iavf/iavf_rxtx.h                  |  24 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         |   4 +-
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       |   6 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          |   2 +-
 drivers/net/ice/ice_dcf.c                     |   2 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  90 +-
 drivers/net/ice/ice_dcf_vf_representor.c      |  58 +-
 drivers/net/ice/ice_ethdev.c                  | 182 ++--
 drivers/net/ice/ice_ethdev.h                  |  26 +-
 drivers/net/ice/ice_hash.c                    | 268 +++---
 drivers/net/ice/ice_rxtx.c                    |   8 +-
 drivers/net/ice/ice_rxtx_vec_avx2.c           |   2 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c         |   4 +-
 drivers/net/ice/ice_rxtx_vec_common.h         |  26 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            |   2 +-
 drivers/net/igc/igc_ethdev.c                  | 134 +--
 drivers/net/igc/igc_ethdev.h                  |  56 +-
 drivers/net/igc/igc_txrx.c                    |  50 +-
 drivers/net/ionic/ionic_ethdev.c              | 128 +--
 drivers/net/ionic/ionic_ethdev.h              |  12 +-
 drivers/net/ionic/ionic_lif.c                 |  36 +-
 drivers/net/ionic/ionic_rxtx.c                |  10 +-
 drivers/net/ipn3ke/ipn3ke_representor.c       |  70 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              | 302 +++---
 drivers/net/ixgbe/ixgbe_ethdev.h              |  18 +-
 drivers/net/ixgbe/ixgbe_fdir.c                |  24 +-
 drivers/net/ixgbe/ixgbe_flow.c                |   2 +-
 drivers/net/ixgbe/ixgbe_ipsec.c               |  12 +-
 drivers/net/ixgbe/ixgbe_pf.c                  |  38 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                | 252 ++---
 drivers/net/ixgbe/ixgbe_rxtx.h                |   4 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h     |   2 +-
 drivers/net/ixgbe/ixgbe_tm.c                  |  16 +-
 drivers/net/ixgbe/ixgbe_vf_representor.c      |  16 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.c             |  14 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.h             |   4 +-
 drivers/net/kni/rte_eth_kni.c                 |   8 +-
 drivers/net/liquidio/lio_ethdev.c             | 102 +-
 drivers/net/memif/memif_socket.c              |   2 +-
 drivers/net/memif/rte_eth_memif.c             |  14 +-
 drivers/net/mlx4/mlx4_ethdev.c                |  32 +-
 drivers/net/mlx4/mlx4_flow.c                  |  30 +-
 drivers/net/mlx4/mlx4_intr.c                  |   8 +-
 drivers/net/mlx4/mlx4_rxq.c                   |  20 +-
 drivers/net/mlx4/mlx4_txq.c                   |  24 +-
 drivers/net/mlx5/linux/mlx5_ethdev_os.c       |  54 +-
 drivers/net/mlx5/linux/mlx5_os.c              |   6 +-
 drivers/net/mlx5/mlx5.c                       |   4 +-
 drivers/net/mlx5/mlx5.h                       |   2 +-
 drivers/net/mlx5/mlx5_defs.h                  |   6 +-
 drivers/net/mlx5/mlx5_ethdev.c                |   6 +-
 drivers/net/mlx5/mlx5_flow.c                  |  54 +-
 drivers/net/mlx5/mlx5_flow.h                  |  12 +-
 drivers/net/mlx5/mlx5_flow_dv.c               |  44 +-
 drivers/net/mlx5/mlx5_flow_verbs.c            |   4 +-
 drivers/net/mlx5/mlx5_rss.c                   |   2 +-
 drivers/net/mlx5/mlx5_rxq.c                   |  42 +-
 drivers/net/mlx5/mlx5_rxtx_vec.h              |   8 +-
 drivers/net/mlx5/mlx5_tx.c                    |  30 +-
 drivers/net/mlx5/mlx5_txq.c                   |  52 +-
 drivers/net/mlx5/mlx5_vlan.c                  |   4 +-
 drivers/net/mlx5/windows/mlx5_os.c            |   4 +-
 drivers/net/mvneta/mvneta_ethdev.c            |  34 +-
 drivers/net/mvneta/mvneta_ethdev.h            |  12 +-
 drivers/net/mvneta/mvneta_rxtx.c              |   2 +-
 drivers/net/mvpp2/mrvl_ethdev.c               | 116 +--
 drivers/net/netvsc/hn_ethdev.c                |  62 +-
 drivers/net/netvsc/hn_rndis.c                 |  50 +-
 drivers/net/nfb/nfb_ethdev.c                  |  20 +-
 drivers/net/nfb/nfb_rx.c                      |   2 +-
 drivers/net/nfp/nfp_common.c                  | 122 +--
 drivers/net/nfp/nfp_ethdev.c                  |   2 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |   2 +-
 drivers/net/ngbe/ngbe_ethdev.c                |  50 +-
 drivers/net/null/rte_eth_null.c               |  16 +-
 drivers/net/octeontx/octeontx_ethdev.c        |  78 +-
 drivers/net/octeontx/octeontx_ethdev.h        |  32 +-
 drivers/net/octeontx/octeontx_ethdev_ops.c    |  26 +-
 drivers/net/octeontx2/otx2_ethdev.c           |  96 +-
 drivers/net/octeontx2/otx2_ethdev.h           |  66 +-
 drivers/net/octeontx2/otx2_ethdev_devargs.c   |  12 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |  18 +-
 drivers/net/octeontx2/otx2_ethdev_sec.c       |   8 +-
 drivers/net/octeontx2/otx2_flow.c             |   2 +-
 drivers/net/octeontx2/otx2_flow_ctrl.c        |  36 +-
 drivers/net/octeontx2/otx2_flow_parse.c       |   4 +-
 drivers/net/octeontx2/otx2_link.c             |  40 +-
 drivers/net/octeontx2/otx2_mcast.c            |   2 +-
 drivers/net/octeontx2/otx2_ptp.c              |   4 +-
 drivers/net/octeontx2/otx2_rss.c              |  62 +-
 drivers/net/octeontx2/otx2_rx.c               |   4 +-
 drivers/net/octeontx2/otx2_tx.c               |   2 +-
 drivers/net/octeontx2/otx2_vlan.c             |  42 +-
 drivers/net/octeontx_ep/otx_ep_ethdev.c       |   8 +-
 drivers/net/octeontx_ep/otx_ep_rxtx.c         |   8 +-
 drivers/net/pcap/pcap_ethdev.c                |  12 +-
 drivers/net/pfe/pfe_ethdev.c                  |  18 +-
 drivers/net/qede/base/mcp_public.h            |   4 +-
 drivers/net/qede/qede_ethdev.c                | 138 +--
 drivers/net/qede/qede_filter.c                |  10 +-
 drivers/net/qede/qede_rxtx.c                  |   2 +-
 drivers/net/qede/qede_rxtx.h                  |  16 +-
 drivers/net/ring/rte_eth_ring.c               |  20 +-
 drivers/net/sfc/sfc.c                         |  30 +-
 drivers/net/sfc/sfc_ef100_rx.c                |  10 +-
 drivers/net/sfc/sfc_ef100_tx.c                |  20 +-
 drivers/net/sfc/sfc_ef10_essb_rx.c            |   4 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |   8 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |  32 +-
 drivers/net/sfc/sfc_ethdev.c                  |  42 +-
 drivers/net/sfc/sfc_flow.c                    |   2 +-
 drivers/net/sfc/sfc_port.c                    |  54 +-
 drivers/net/sfc/sfc_rx.c                      |  52 +-
 drivers/net/sfc/sfc_tx.c                      |  50 +-
 drivers/net/softnic/rte_eth_softnic.c         |  12 +-
 drivers/net/szedata2/rte_eth_szedata2.c       |  14 +-
 drivers/net/tap/rte_eth_tap.c                 | 104 +-
 drivers/net/tap/tap_rss.h                     |   2 +-
 drivers/net/thunderx/nicvf_ethdev.c           | 100 +-
 drivers/net/thunderx/nicvf_ethdev.h           |  42 +-
 drivers/net/txgbe/txgbe_ethdev.c              | 236 ++---
 drivers/net/txgbe/txgbe_ethdev.h              |  18 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |  24 +-
 drivers/net/txgbe/txgbe_fdir.c                |  20 +-
 drivers/net/txgbe/txgbe_flow.c                |   2 +-
 drivers/net/txgbe/txgbe_ipsec.c               |  12 +-
 drivers/net/txgbe/txgbe_pf.c                  |  34 +-
 drivers/net/txgbe/txgbe_rxtx.c                | 312 +++---
 drivers/net/txgbe/txgbe_rxtx.h                |   4 +-
 drivers/net/txgbe/txgbe_tm.c                  |  16 +-
 drivers/net/vhost/rte_eth_vhost.c             |  16 +-
 drivers/net/virtio/virtio_ethdev.c            | 126 +--
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |  74 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.h          |  16 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |  16 +-
 examples/bbdev_app/main.c                     |   6 +-
 examples/bond/main.c                          |  14 +-
 examples/distributor/main.c                   |  12 +-
 examples/ethtool/ethtool-app/main.c           |   2 +-
 examples/ethtool/lib/rte_ethtool.c            |  18 +-
 .../pipeline_worker_generic.c                 |  16 +-
 .../eventdev_pipeline/pipeline_worker_tx.c    |  12 +-
 examples/flow_classify/flow_classify.c        |   4 +-
 examples/flow_filtering/main.c                |  16 +-
 examples/ioat/ioatfwd.c                       |   8 +-
 examples/ip_fragmentation/main.c              |  14 +-
 examples/ip_pipeline/link.c                   |  14 +-
 examples/ip_reassembly/main.c                 |  20 +-
 examples/ipsec-secgw/ipsec-secgw.c            |  34 +-
 examples/ipsec-secgw/sa.c                     |   8 +-
 examples/ipv4_multicast/main.c                |   8 +-
 examples/kni/main.c                           |  12 +-
 examples/l2fwd-crypto/main.c                  |  10 +-
 examples/l2fwd-event/l2fwd_common.c           |  10 +-
 examples/l2fwd-event/main.c                   |   2 +-
 examples/l2fwd-jobstats/main.c                |   8 +-
 examples/l2fwd-keepalive/main.c               |   8 +-
 examples/l2fwd/main.c                         |   8 +-
 examples/l3fwd-acl/main.c                     |  20 +-
 examples/l3fwd-graph/main.c                   |  16 +-
 examples/l3fwd-power/main.c                   |  18 +-
 examples/l3fwd/l3fwd_event.c                  |   4 +-
 examples/l3fwd/main.c                         |  20 +-
 examples/link_status_interrupt/main.c         |  10 +-
 .../client_server_mp/mp_server/init.c         |   4 +-
 examples/multi_process/symmetric_mp/main.c    |  14 +-
 examples/ntb/ntb_fwd.c                        |   6 +-
 examples/packet_ordering/main.c               |   4 +-
 .../performance-thread/l3fwd-thread/main.c    |  18 +-
 examples/pipeline/obj.c                       |  14 +-
 examples/ptpclient/ptpclient.c                |  10 +-
 examples/qos_meter/main.c                     |  16 +-
 examples/qos_sched/init.c                     |   6 +-
 examples/rxtx_callbacks/main.c                |   8 +-
 examples/server_node_efd/server/init.c        |   8 +-
 examples/skeleton/basicfwd.c                  |   4 +-
 examples/vhost/main.c                         |  28 +-
 examples/vm_power_manager/main.c              |   6 +-
 examples/vmdq/main.c                          |  20 +-
 examples/vmdq_dcb/main.c                      |  40 +-
 lib/ethdev/rte_ethdev.c                       | 187 ++--
 lib/ethdev/rte_ethdev.h                       | 907 +++++++++++-------
 lib/ethdev/rte_ethdev_core.h                  |   2 +-
 lib/ethdev/rte_flow.h                         |   2 +-
 lib/gso/rte_gso.c                             |  20 +-
 lib/gso/rte_gso.h                             |   4 +-
 lib/mbuf/rte_mbuf_core.h                      |   8 +-
 lib/mbuf/rte_mbuf_dyn.h                       |   2 +-
 337 files changed, 6520 insertions(+), 6295 deletions(-)

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9ff3..963b6aa5c589 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -757,11 +757,11 @@ show_port(void)
 		}
 
 		ret = rte_eth_dev_flow_ctrl_get(i, &fc_conf);
-		if (ret == 0 && fc_conf.mode != RTE_FC_NONE)  {
+		if (ret == 0 && fc_conf.mode != RTE_ETH_FC_NONE)  {
 			printf("\t  -- flow control mode %s%s high %u low %u pause %u%s%s\n",
-			       fc_conf.mode == RTE_FC_RX_PAUSE ? "rx " :
-			       fc_conf.mode == RTE_FC_TX_PAUSE ? "tx " :
-			       fc_conf.mode == RTE_FC_FULL ? "full" : "???",
+			       fc_conf.mode == RTE_ETH_FC_RX_PAUSE ? "rx " :
+			       fc_conf.mode == RTE_ETH_FC_TX_PAUSE ? "tx " :
+			       fc_conf.mode == RTE_ETH_FC_FULL ? "full" : "???",
 			       fc_conf.autoneg ? " auto" : "",
 			       fc_conf.high_water,
 			       fc_conf.low_water,
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index cc100650c21e..41e92143121b 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -668,14 +668,14 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct test_perf *t = evt_test_priv(test);
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 			.split_hdr_size = 0,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 6ee530d4cdc9..96c8a5828364 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -176,12 +176,12 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct rte_eth_rxconf rx_conf;
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
@@ -199,7 +199,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 
 	port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
 	if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	t->internal_port = 1;
 	RTE_ETH_FOREACH_DEV(i) {
@@ -224,7 +224,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 
 		if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
 			local_port_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_RSS_HASH;
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		ret = rte_eth_dev_info_get(i, &dev_info);
 		if (ret != 0) {
@@ -234,9 +234,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 		}
 
 		/* Enable mbuf fast free if PMD has the capability. */
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		rx_conf = dev_info.default_rxconf;
 		rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e185..4249b6175b82 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP)
+#define GET_RSS_HF() (RTE_ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index fe37d63730c6..c73801904103 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -70,16 +70,16 @@ struct app_params app = {
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -178,7 +178,7 @@ app_ports_check_link(void)
 		RTE_LOG(INFO, USER1, "Port %u %s\n",
 			port,
 			link_status_text);
-		if (link.link_status == ETH_LINK_DOWN)
+		if (link.link_status == RTE_ETH_LINK_DOWN)
 			all_ports_up = 0;
 	}
 
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 82253bc75110..d6db93557b95 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1490,51 +1490,51 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
 	int duplex;
 
 	if (!strcmp(duplexstr, "half")) {
-		duplex = ETH_LINK_HALF_DUPLEX;
+		duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	} else if (!strcmp(duplexstr, "full")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else if (!strcmp(duplexstr, "auto")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		fprintf(stderr, "Unknown duplex parameter\n");
 		return -1;
 	}
 
 	if (!strcmp(speedstr, "10")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_10M_HD : ETH_LINK_SPEED_10M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
 	} else if (!strcmp(speedstr, "100")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_100M_HD : ETH_LINK_SPEED_100M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
 	} else {
-		if (duplex != ETH_LINK_FULL_DUPLEX) {
+		if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
 			fprintf(stderr, "Invalid speed/duplex parameters\n");
 			return -1;
 		}
 		if (!strcmp(speedstr, "1000")) {
-			*speed = ETH_LINK_SPEED_1G;
+			*speed = RTE_ETH_LINK_SPEED_1G;
 		} else if (!strcmp(speedstr, "10000")) {
-			*speed = ETH_LINK_SPEED_10G;
+			*speed = RTE_ETH_LINK_SPEED_10G;
 		} else if (!strcmp(speedstr, "25000")) {
-			*speed = ETH_LINK_SPEED_25G;
+			*speed = RTE_ETH_LINK_SPEED_25G;
 		} else if (!strcmp(speedstr, "40000")) {
-			*speed = ETH_LINK_SPEED_40G;
+			*speed = RTE_ETH_LINK_SPEED_40G;
 		} else if (!strcmp(speedstr, "50000")) {
-			*speed = ETH_LINK_SPEED_50G;
+			*speed = RTE_ETH_LINK_SPEED_50G;
 		} else if (!strcmp(speedstr, "100000")) {
-			*speed = ETH_LINK_SPEED_100G;
+			*speed = RTE_ETH_LINK_SPEED_100G;
 		} else if (!strcmp(speedstr, "200000")) {
-			*speed = ETH_LINK_SPEED_200G;
+			*speed = RTE_ETH_LINK_SPEED_200G;
 		} else if (!strcmp(speedstr, "auto")) {
-			*speed = ETH_LINK_SPEED_AUTONEG;
+			*speed = RTE_ETH_LINK_SPEED_AUTONEG;
 		} else {
 			fprintf(stderr, "Unknown speed parameter\n");
 			return -1;
 		}
 	}
 
-	if (*speed != ETH_LINK_SPEED_AUTONEG)
-		*speed |= ETH_LINK_SPEED_FIXED;
+	if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
+		*speed |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return 0;
 }
@@ -2185,33 +2185,33 @@ cmd_config_rss_parsed(void *parsed_result,
 	int ret;
 
 	if (!strcmp(res->value, "all"))
-		rss_conf.rss_hf = ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP |
-			ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP |
-			ETH_RSS_L2_PAYLOAD | ETH_RSS_L2TPV3 | ETH_RSS_ESP |
-			ETH_RSS_AH | ETH_RSS_PFCP | ETH_RSS_GTPU |
-			ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP |
+			RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP |
+			RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP |
+			RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU |
+			RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "eth"))
-		rss_conf.rss_hf = ETH_RSS_ETH;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH;
 	else if (!strcmp(res->value, "vlan"))
-		rss_conf.rss_hf = ETH_RSS_VLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VLAN;
 	else if (!strcmp(res->value, "ip"))
-		rss_conf.rss_hf = ETH_RSS_IP;
+		rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	else if (!strcmp(res->value, "udp"))
-		rss_conf.rss_hf = ETH_RSS_UDP;
+		rss_conf.rss_hf = RTE_ETH_RSS_UDP;
 	else if (!strcmp(res->value, "tcp"))
-		rss_conf.rss_hf = ETH_RSS_TCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_TCP;
 	else if (!strcmp(res->value, "sctp"))
-		rss_conf.rss_hf = ETH_RSS_SCTP;
+		rss_conf.rss_hf = RTE_ETH_RSS_SCTP;
 	else if (!strcmp(res->value, "ether"))
-		rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD;
 	else if (!strcmp(res->value, "port"))
-		rss_conf.rss_hf = ETH_RSS_PORT;
+		rss_conf.rss_hf = RTE_ETH_RSS_PORT;
 	else if (!strcmp(res->value, "vxlan"))
-		rss_conf.rss_hf = ETH_RSS_VXLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VXLAN;
 	else if (!strcmp(res->value, "geneve"))
-		rss_conf.rss_hf = ETH_RSS_GENEVE;
+		rss_conf.rss_hf = RTE_ETH_RSS_GENEVE;
 	else if (!strcmp(res->value, "nvgre"))
-		rss_conf.rss_hf = ETH_RSS_NVGRE;
+		rss_conf.rss_hf = RTE_ETH_RSS_NVGRE;
 	else if (!strcmp(res->value, "l3-pre32"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32;
 	else if (!strcmp(res->value, "l3-pre40"))
@@ -2225,44 +2225,44 @@ cmd_config_rss_parsed(void *parsed_result,
 	else if (!strcmp(res->value, "l3-pre96"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96;
 	else if (!strcmp(res->value, "l3-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY;
 	else if (!strcmp(res->value, "l3-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY;
 	else if (!strcmp(res->value, "l4-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY;
 	else if (!strcmp(res->value, "l4-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY;
 	else if (!strcmp(res->value, "l2-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY;
 	else if (!strcmp(res->value, "l2-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY;
 	else if (!strcmp(res->value, "l2tpv3"))
-		rss_conf.rss_hf = ETH_RSS_L2TPV3;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3;
 	else if (!strcmp(res->value, "esp"))
-		rss_conf.rss_hf = ETH_RSS_ESP;
+		rss_conf.rss_hf = RTE_ETH_RSS_ESP;
 	else if (!strcmp(res->value, "ah"))
-		rss_conf.rss_hf = ETH_RSS_AH;
+		rss_conf.rss_hf = RTE_ETH_RSS_AH;
 	else if (!strcmp(res->value, "pfcp"))
-		rss_conf.rss_hf = ETH_RSS_PFCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_PFCP;
 	else if (!strcmp(res->value, "pppoe"))
-		rss_conf.rss_hf = ETH_RSS_PPPOE;
+		rss_conf.rss_hf = RTE_ETH_RSS_PPPOE;
 	else if (!strcmp(res->value, "gtpu"))
-		rss_conf.rss_hf = ETH_RSS_GTPU;
+		rss_conf.rss_hf = RTE_ETH_RSS_GTPU;
 	else if (!strcmp(res->value, "ecpri"))
-		rss_conf.rss_hf = ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "mpls"))
-		rss_conf.rss_hf = ETH_RSS_MPLS;
+		rss_conf.rss_hf = RTE_ETH_RSS_MPLS;
 	else if (!strcmp(res->value, "none"))
 		rss_conf.rss_hf = 0;
 	else if (!strcmp(res->value, "level-default")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_PMD_DEFAULT);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT);
 	} else if (!strcmp(res->value, "level-outer")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_OUTERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_OUTERMOST);
 	} else if (!strcmp(res->value, "level-inner")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_INNERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST);
 	} else if (!strcmp(res->value, "default"))
 		use_default = 1;
 	else if (isdigit(res->value[0]) && atoi(res->value) > 0 &&
@@ -3029,10 +3029,10 @@ cmd_set_rss_reta_parsed(void *parsed_result,
 	} else
 		printf("The reta size of port %d is %u\n",
 			res->port_id, dev_info.reta_size);
-	if (dev_info.reta_size > ETH_RSS_RETA_SIZE_512) {
+	if (dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		fprintf(stderr,
 			"Currently do not support more than %u entries of redirection table\n",
-			ETH_RSS_RETA_SIZE_512);
+			RTE_ETH_RSS_RETA_SIZE_512);
 		return;
 	}
 
@@ -3149,7 +3149,7 @@ cmd_showport_reta_parsed(void *parsed_result,
 	if (ret != 0)
 		return;
 
-	max_reta_size = RTE_MIN(dev_info.reta_size, ETH_RSS_RETA_SIZE_512);
+	max_reta_size = RTE_MIN(dev_info.reta_size, RTE_ETH_RSS_RETA_SIZE_512);
 	if (res->size == 0 || res->size > max_reta_size) {
 		fprintf(stderr, "Invalid redirection table size: %u (1-%u)\n",
 			res->size, max_reta_size);
@@ -3289,7 +3289,7 @@ cmd_config_dcb_parsed(void *parsed_result,
 		return;
 	}
 
-	if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+	if ((res->num_tcs != RTE_ETH_4_TCS) && (res->num_tcs != RTE_ETH_8_TCS)) {
 		fprintf(stderr,
 			"The invalid number of traffic class, only 4 or 8 allowed.\n");
 		return;
@@ -4293,9 +4293,9 @@ cmd_vlan_tpid_parsed(void *parsed_result,
 	enum rte_vlan_type vlan_type;
 
 	if (!strcmp(res->vlan_type, "inner"))
-		vlan_type = ETH_VLAN_TYPE_INNER;
+		vlan_type = RTE_ETH_VLAN_TYPE_INNER;
 	else if (!strcmp(res->vlan_type, "outer"))
-		vlan_type = ETH_VLAN_TYPE_OUTER;
+		vlan_type = RTE_ETH_VLAN_TYPE_OUTER;
 	else {
 		fprintf(stderr, "Unknown vlan type\n");
 		return;
@@ -4632,55 +4632,55 @@ csum_show(int port_id)
 	printf("Parse tunnel is %s\n",
 		(ports[port_id].parse_tunnel) ? "on" : "off");
 	printf("IP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
 	printf("UDP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
 	printf("TCP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
 	printf("SCTP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
 	printf("Outer-Ip checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
 	printf("Outer-Udp checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
 
 	/* display warnings if configuration is not supported by the NIC */
 	ret = eth_dev_info_get_print_err(port_id, &dev_info);
 	if (ret != 0)
 		return;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware UDP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware TCP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware SCTP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware outer IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 			== 0) {
 		fprintf(stderr,
 			"Warning: hardware outer UDP checksum enabled but not supported by port %d\n",
@@ -4730,8 +4730,8 @@ cmd_csum_parsed(void *parsed_result,
 
 		if (!strcmp(res->proto, "ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_IPV4_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"IP checksum offload is not supported by port %u\n",
@@ -4739,8 +4739,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_UDP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"UDP checksum offload is not supported by port %u\n",
@@ -4748,8 +4748,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "tcp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_TCP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"TCP checksum offload is not supported by port %u\n",
@@ -4757,8 +4757,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "sctp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_SCTP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"SCTP checksum offload is not supported by port %u\n",
@@ -4766,9 +4766,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer IP checksum offload is not supported by port %u\n",
@@ -4776,9 +4776,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer UDP checksum offload is not supported by port %u\n",
@@ -4933,7 +4933,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr, "Error: TSO is not supported by port %d\n",
 			res->port_id);
 		return;
@@ -4941,11 +4941,11 @@ cmd_tso_set_parsed(void *parsed_result,
 
 	if (ports[res->port_id].tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_TCP_TSO;
+						~RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO for non-tunneled packets is disabled\n");
 	} else {
 		ports[res->port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_TCP_TSO;
+						RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO segment size for non-tunneled packets is %d\n",
 			ports[res->port_id].tso_segsz);
 	}
@@ -4957,7 +4957,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr,
 			"Warning: TSO enabled but not supported by port %d\n",
 			res->port_id);
@@ -5028,27 +5028,27 @@ check_tunnel_tso_nic_support(portid_t port_id)
 	if (eth_dev_info_get_print_err(port_id, &dev_info) != 0)
 		return dev_info;
 
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
 		fprintf(stderr,
 			"Warning: VXLAN TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GRE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPIP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IPIP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GENEVE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: UDP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
@@ -5076,20 +5076,20 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 	dev_info = check_tunnel_tso_nic_support(res->port_id);
 	if (ports[res->port_id].tunnel_tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-			~(DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IP_TNL_TSO |
-			  DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			~(RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 		printf("TSO for tunneled packets is disabled\n");
 	} else {
-		uint64_t tso_offloads = (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-					 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IP_TNL_TSO |
-					 DEV_TX_OFFLOAD_UDP_TNL_TSO);
+		uint64_t tso_offloads = (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 
 		ports[res->port_id].dev_conf.txmode.offloads |=
 			(tso_offloads & dev_info.tx_offload_capa);
@@ -5112,7 +5112,7 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 			fprintf(stderr,
 				"Warning: csum parse_tunnel must be set so that tunneled packets are recognized\n");
 		if (!(ports[res->port_id].dev_conf.txmode.offloads &
-		      DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+		      RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 			fprintf(stderr,
 				"Warning: csum set outer-ip must be set to hw if outer L3 is IPv4; not necessary for IPv6\n");
 	}
@@ -7058,9 +7058,9 @@ cmd_link_flow_ctrl_show_parsed(void *parsed_result,
 		return;
 	}
 
-	if (fc_conf.mode == RTE_FC_RX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_RX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		rx_fc_en = true;
-	if (fc_conf.mode == RTE_FC_TX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_TX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		tx_fc_en = true;
 
 	printf("\n%s Flow control infos for port %-2d %s\n",
@@ -7338,12 +7338,12 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_lfc_mode[2][2] = {
-			{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+			{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	/* Partial command line, retrieve current configuration */
@@ -7356,11 +7356,11 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 			return;
 		}
 
-		if ((fc_conf.mode == RTE_FC_RX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_RX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			rx_fc_en = 1;
-		if ((fc_conf.mode == RTE_FC_TX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			tx_fc_en = 1;
 	}
 
@@ -7428,12 +7428,12 @@ cmd_priority_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_pfc_mode[2][2] = {
-		{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+		{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	memset(&pfc_conf, 0, sizeof(struct rte_eth_pfc_conf));
@@ -8950,13 +8950,13 @@ cmd_set_vf_rxmode_parsed(void *parsed_result,
 	int is_on = (strcmp(res->on, "on") == 0) ? 1 : 0;
 	if (!strcmp(res->what,"rxmode")) {
 		if (!strcmp(res->mode, "AUPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_UNTAG;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_UNTAG;
 		else if (!strcmp(res->mode, "ROPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_HASH_UC;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_HASH_UC;
 		else if (!strcmp(res->mode, "BAM"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_BROADCAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_BROADCAST;
 		else if (!strncmp(res->mode, "MPE",3))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_MULTICAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 	}
 
 	RTE_SET_USED(is_on);
@@ -9356,7 +9356,7 @@ cmd_tunnel_udp_config_parsed(void *parsed_result,
 	int ret;
 
 	tunnel_udp.udp_port = res->udp_port;
-	tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+	tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 
 	if (!strcmp(res->what, "add"))
 		ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
@@ -9422,13 +9422,13 @@ cmd_cfg_tunnel_udp_port_parsed(void *parsed_result,
 	tunnel_udp.udp_port = res->udp_port;
 
 	if (!strcmp(res->tunnel_type, "vxlan")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 	} else if (!strcmp(res->tunnel_type, "geneve")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_GENEVE;
 	} else if (!strcmp(res->tunnel_type, "vxlan-gpe")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN_GPE;
 	} else if (!strcmp(res->tunnel_type, "ecpri")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_ECPRI;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_ECPRI;
 	} else {
 		fprintf(stderr, "Invalid tunnel type\n");
 		return;
@@ -9543,20 +9543,20 @@ cmd_set_mirror_mask_parsed(void *parsed_result,
 
 	memset(&mr_conf, 0, sizeof(struct rte_eth_mirror_conf));
 
-	unsigned int vlan_list[ETH_MIRROR_MAX_VLANS];
+	unsigned int vlan_list[RTE_ETH_MIRROR_MAX_VLANS];
 
 	mr_conf.dst_pool = res->dstpool_id;
 
 	if (!strcmp(res->what, "pool-mirror-up")) {
 		mr_conf.pool_mask = strtoull(res->value, NULL, 16);
-		mr_conf.rule_type = ETH_MIRROR_VIRTUAL_POOL_UP;
+		mr_conf.rule_type = RTE_ETH_MIRROR_VIRTUAL_POOL_UP;
 	} else if (!strcmp(res->what, "pool-mirror-down")) {
 		mr_conf.pool_mask = strtoull(res->value, NULL, 16);
-		mr_conf.rule_type = ETH_MIRROR_VIRTUAL_POOL_DOWN;
+		mr_conf.rule_type = RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN;
 	} else if (!strcmp(res->what, "vlan-mirror")) {
-		mr_conf.rule_type = ETH_MIRROR_VLAN;
+		mr_conf.rule_type = RTE_ETH_MIRROR_VLAN;
 		nb_item = parse_item_list(res->value, "vlan",
-				ETH_MIRROR_MAX_VLANS, vlan_list, 1);
+				RTE_ETH_MIRROR_MAX_VLANS, vlan_list, 1);
 		if (nb_item <= 0)
 			return;
 
@@ -9656,9 +9656,9 @@ cmd_set_mirror_link_parsed(void *parsed_result,
 
 	memset(&mr_conf, 0, sizeof(struct rte_eth_mirror_conf));
 	if (!strcmp(res->what, "uplink-mirror"))
-		mr_conf.rule_type = ETH_MIRROR_UPLINK_PORT;
+		mr_conf.rule_type = RTE_ETH_MIRROR_UPLINK_PORT;
 	else
-		mr_conf.rule_type = ETH_MIRROR_DOWNLINK_PORT;
+		mr_conf.rule_type = RTE_ETH_MIRROR_DOWNLINK_PORT;
 
 	mr_conf.dst_pool = res->dstpool_id;
 
@@ -11823,7 +11823,7 @@ cmd_set_macsec_offload_on_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_enable(port_id, en, rp);
 #endif
@@ -11834,7 +11834,7 @@ cmd_set_macsec_offload_on_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MACSEC_INSERT;
+						RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
@@ -11920,7 +11920,7 @@ cmd_set_macsec_offload_off_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_disable(port_id);
 #endif
@@ -11928,7 +11928,7 @@ cmd_set_macsec_offload_off_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_MACSEC_INSERT;
+						~RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 31d8ba1b913c..1b0b9ab6d445 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -86,60 +86,60 @@ static const struct {
 };
 
 const struct rss_type_info rss_type_table[] = {
-	{ "all", ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP | ETH_RSS_TCP |
-		ETH_RSS_UDP | ETH_RSS_SCTP | ETH_RSS_L2_PAYLOAD |
-		ETH_RSS_L2TPV3 | ETH_RSS_ESP | ETH_RSS_AH | ETH_RSS_PFCP |
-		ETH_RSS_GTPU | ETH_RSS_ECPRI | ETH_RSS_MPLS},
+	{ "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
+		RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD |
+		RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP |
+		RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS},
 	{ "none", 0 },
-	{ "eth", ETH_RSS_ETH },
-	{ "l2-src-only", ETH_RSS_L2_SRC_ONLY },
-	{ "l2-dst-only", ETH_RSS_L2_DST_ONLY },
-	{ "vlan", ETH_RSS_VLAN },
-	{ "s-vlan", ETH_RSS_S_VLAN },
-	{ "c-vlan", ETH_RSS_C_VLAN },
-	{ "ipv4", ETH_RSS_IPV4 },
-	{ "ipv4-frag", ETH_RSS_FRAG_IPV4 },
-	{ "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
-	{ "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
-	{ "ipv4-sctp", ETH_RSS_NONFRAG_IPV4_SCTP },
-	{ "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
-	{ "ipv6", ETH_RSS_IPV6 },
-	{ "ipv6-frag", ETH_RSS_FRAG_IPV6 },
-	{ "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
-	{ "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
-	{ "ipv6-sctp", ETH_RSS_NONFRAG_IPV6_SCTP },
-	{ "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
-	{ "l2-payload", ETH_RSS_L2_PAYLOAD },
-	{ "ipv6-ex", ETH_RSS_IPV6_EX },
-	{ "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
-	{ "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
-	{ "port", ETH_RSS_PORT },
-	{ "vxlan", ETH_RSS_VXLAN },
-	{ "geneve", ETH_RSS_GENEVE },
-	{ "nvgre", ETH_RSS_NVGRE },
-	{ "ip", ETH_RSS_IP },
-	{ "udp", ETH_RSS_UDP },
-	{ "tcp", ETH_RSS_TCP },
-	{ "sctp", ETH_RSS_SCTP },
-	{ "tunnel", ETH_RSS_TUNNEL },
+	{ "eth", RTE_ETH_RSS_ETH },
+	{ "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY },
+	{ "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY },
+	{ "vlan", RTE_ETH_RSS_VLAN },
+	{ "s-vlan", RTE_ETH_RSS_S_VLAN },
+	{ "c-vlan", RTE_ETH_RSS_C_VLAN },
+	{ "ipv4", RTE_ETH_RSS_IPV4 },
+	{ "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 },
+	{ "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP },
+	{ "ipv4-udp", RTE_ETH_RSS_NONFRAG_IPV4_UDP },
+	{ "ipv4-sctp", RTE_ETH_RSS_NONFRAG_IPV4_SCTP },
+	{ "ipv4-other", RTE_ETH_RSS_NONFRAG_IPV4_OTHER },
+	{ "ipv6", RTE_ETH_RSS_IPV6 },
+	{ "ipv6-frag", RTE_ETH_RSS_FRAG_IPV6 },
+	{ "ipv6-tcp", RTE_ETH_RSS_NONFRAG_IPV6_TCP },
+	{ "ipv6-udp", RTE_ETH_RSS_NONFRAG_IPV6_UDP },
+	{ "ipv6-sctp", RTE_ETH_RSS_NONFRAG_IPV6_SCTP },
+	{ "ipv6-other", RTE_ETH_RSS_NONFRAG_IPV6_OTHER },
+	{ "l2-payload", RTE_ETH_RSS_L2_PAYLOAD },
+	{ "ipv6-ex", RTE_ETH_RSS_IPV6_EX },
+	{ "ipv6-tcp-ex", RTE_ETH_RSS_IPV6_TCP_EX },
+	{ "ipv6-udp-ex", RTE_ETH_RSS_IPV6_UDP_EX },
+	{ "port", RTE_ETH_RSS_PORT },
+	{ "vxlan", RTE_ETH_RSS_VXLAN },
+	{ "geneve", RTE_ETH_RSS_GENEVE },
+	{ "nvgre", RTE_ETH_RSS_NVGRE },
+	{ "ip", RTE_ETH_RSS_IP },
+	{ "udp", RTE_ETH_RSS_UDP },
+	{ "tcp", RTE_ETH_RSS_TCP },
+	{ "sctp", RTE_ETH_RSS_SCTP },
+	{ "tunnel", RTE_ETH_RSS_TUNNEL },
 	{ "l3-pre32", RTE_ETH_RSS_L3_PRE32 },
 	{ "l3-pre40", RTE_ETH_RSS_L3_PRE40 },
 	{ "l3-pre48", RTE_ETH_RSS_L3_PRE48 },
 	{ "l3-pre56", RTE_ETH_RSS_L3_PRE56 },
 	{ "l3-pre64", RTE_ETH_RSS_L3_PRE64 },
 	{ "l3-pre96", RTE_ETH_RSS_L3_PRE96 },
-	{ "l3-src-only", ETH_RSS_L3_SRC_ONLY },
-	{ "l3-dst-only", ETH_RSS_L3_DST_ONLY },
-	{ "l4-src-only", ETH_RSS_L4_SRC_ONLY },
-	{ "l4-dst-only", ETH_RSS_L4_DST_ONLY },
-	{ "esp", ETH_RSS_ESP },
-	{ "ah", ETH_RSS_AH },
-	{ "l2tpv3", ETH_RSS_L2TPV3 },
-	{ "pfcp", ETH_RSS_PFCP },
-	{ "pppoe", ETH_RSS_PPPOE },
-	{ "gtpu", ETH_RSS_GTPU },
-	{ "ecpri", ETH_RSS_ECPRI },
-	{ "mpls", ETH_RSS_MPLS },
+	{ "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY },
+	{ "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY },
+	{ "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY },
+	{ "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY },
+	{ "esp", RTE_ETH_RSS_ESP },
+	{ "ah", RTE_ETH_RSS_AH },
+	{ "l2tpv3", RTE_ETH_RSS_L2TPV3 },
+	{ "pfcp", RTE_ETH_RSS_PFCP },
+	{ "pppoe", RTE_ETH_RSS_PPPOE },
+	{ "gtpu", RTE_ETH_RSS_GTPU },
+	{ "ecpri", RTE_ETH_RSS_ECPRI },
+	{ "mpls", RTE_ETH_RSS_MPLS },
 	{ NULL, 0 },
 };
 
@@ -474,39 +474,39 @@ static void
 device_infos_display_speeds(uint32_t speed_capa)
 {
 	printf("\n\tDevice speed capability:");
-	if (speed_capa == ETH_LINK_SPEED_AUTONEG)
+	if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
 		printf(" Autonegotiate (all speeds)");
-	if (speed_capa & ETH_LINK_SPEED_FIXED)
+	if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
 		printf(" Disable autonegotiate (fixed speed)  ");
-	if (speed_capa & ETH_LINK_SPEED_10M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
 		printf(" 10 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_10M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M)
 		printf(" 10 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
 		printf(" 100 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M)
 		printf(" 100 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_1G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_1G)
 		printf(" 1 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_2_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
 		printf(" 2.5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_5G)
 		printf(" 5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_10G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10G)
 		printf(" 10 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_20G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_20G)
 		printf(" 20 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_25G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_25G)
 		printf(" 25 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_40G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_40G)
 		printf(" 40 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_50G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_50G)
 		printf(" 50 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_56G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_56G)
 		printf(" 56 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_100G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G)
 		printf(" 100 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_200G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_200G)
 		printf(" 200 Gbps  ");
 }
 
@@ -636,9 +636,9 @@ port_infos_display(portid_t port_id)
 
 	printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
 	printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
-	printf("Link duplex: %s\n", (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+	printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 	       ("full-duplex") : ("half-duplex"));
-	printf("Autoneg status: %s\n", (link.link_autoneg == ETH_LINK_AUTONEG) ?
+	printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 	       ("On") : ("Off"));
 
 	if (!rte_eth_dev_get_mtu(port_id, &mtu))
@@ -656,22 +656,22 @@ port_infos_display(portid_t port_id)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 	if (vlan_offload >= 0){
 		printf("VLAN offload: \n");
-		if (vlan_offload & ETH_VLAN_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_STRIP_OFFLOAD)
 			printf("  strip on, ");
 		else
 			printf("  strip off, ");
 
-		if (vlan_offload & ETH_VLAN_FILTER_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_FILTER_OFFLOAD)
 			printf("filter on, ");
 		else
 			printf("filter off, ");
 
-		if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_EXTEND_OFFLOAD)
 			printf("extend on, ");
 		else
 			printf("extend off, ");
 
-		if (vlan_offload & ETH_QINQ_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_QINQ_STRIP_OFFLOAD)
 			printf("qinq strip on\n");
 		else
 			printf("qinq strip off\n");
@@ -1166,7 +1166,7 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
 	diag = rte_eth_dev_set_mtu(port_id, mtu);
 	if (diag)
 		fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
-	else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	else if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		/*
 		 * Ether overhead in driver is equal to the difference of
 		 * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
@@ -1175,12 +1175,12 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
 		eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
 		if (mtu > RTE_ETHER_MTU) {
 			rte_port->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			rte_port->dev_conf.rxmode.max_rx_pkt_len =
 						mtu + eth_overhead;
 		} else
 			rte_port->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	}
 }
 
@@ -3118,7 +3118,7 @@ dcb_fwd_config_setup(void)
 	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
 		fwd_lcores[lc_id]->stream_nb = 0;
 		fwd_lcores[lc_id]->stream_idx = sm_id;
-		for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+		for (i = 0; i < RTE_ETH_MAX_VMDQ_POOL; i++) {
 			/* if the nb_queue is zero, means this tc is
 			 * not enabled on the POOL
 			 */
@@ -4181,11 +4181,11 @@ vlan_extend_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	} else {
-		vlan_offload &= ~ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload &= ~RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4211,11 +4211,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		vlan_offload &= ~ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4256,11 +4256,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload |= RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	} else {
-		vlan_offload &= ~ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4286,11 +4286,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload |= RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	} else {
-		vlan_offload &= ~ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4360,7 +4360,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 		return;
 
 	if (ports[port_id].dev_conf.txmode.offloads &
-	    DEV_TX_OFFLOAD_QINQ_INSERT) {
+	    RTE_ETH_TX_OFFLOAD_QINQ_INSERT) {
 		fprintf(stderr, "Error, as QinQ has been enabled.\n");
 		return;
 	}
@@ -4369,7 +4369,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: vlan insert is not supported by port %d\n",
 			port_id);
@@ -4377,7 +4377,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	ports[port_id].dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	ports[port_id].tx_vlan_id = vlan_id;
 }
 
@@ -4396,7 +4396,7 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: qinq insert not supported by port %d\n",
 			port_id);
@@ -4404,8 +4404,8 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= (DEV_TX_OFFLOAD_VLAN_INSERT |
-						    DEV_TX_OFFLOAD_QINQ_INSERT);
+	ports[port_id].dev_conf.txmode.offloads |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+						    RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = vlan_id;
 	ports[port_id].tx_vlan_id_outer = vlan_id_outer;
 }
@@ -4414,8 +4414,8 @@ void
 tx_vlan_reset(portid_t port_id)
 {
 	ports[port_id].dev_conf.txmode.offloads &=
-				~(DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_QINQ_INSERT);
+				~(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = 0;
 	ports[port_id].tx_vlan_id_outer = 0;
 }
@@ -4821,7 +4821,7 @@ set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
 	ret = eth_link_get_nowait_print_err(port_id, &link);
 	if (ret < 0)
 		return 1;
-	if (link.link_speed != ETH_SPEED_NUM_UNKNOWN &&
+	if (link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN &&
 	    rate > link.link_speed) {
 		fprintf(stderr,
 			"Invalid rate value:%u bigger than link speed: %u\n",
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 38cc256533b6..454a2d41c366 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -485,7 +485,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
 			ol_flags |= PKT_TX_IP_CKSUM;
 		} else {
-			if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 				ol_flags |= PKT_TX_IP_CKSUM;
 			} else {
 				ipv4_hdr->hdr_checksum = 0;
@@ -502,7 +502,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		udp_hdr = (struct rte_udp_hdr *)((char *)l3_hdr + info->l3_len);
 		/* do not recalculate udp cksum if it was 0 */
 		if (udp_hdr->dgram_cksum != 0) {
-			if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 				ol_flags |= PKT_TX_UDP_CKSUM;
 			} else {
 				udp_hdr->dgram_cksum = 0;
@@ -517,7 +517,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
 		if (tso_segsz)
 			ol_flags |= PKT_TX_TCP_SEG;
-		else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+		else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 			ol_flags |= PKT_TX_TCP_CKSUM;
 		} else {
 			tcp_hdr->cksum = 0;
@@ -532,7 +532,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			((char *)l3_hdr + info->l3_len);
 		/* sctp payload must be a multiple of 4 to be
 		 * offloaded */
-		if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+		if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
 			((ipv4_hdr->total_length & 0x3) == 0)) {
 			ol_flags |= PKT_TX_SCTP_CKSUM;
 		} else {
@@ -559,7 +559,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ipv4_hdr->hdr_checksum = 0;
 		ol_flags |= PKT_TX_OUTER_IPV4;
 
-		if (tx_offloads	& DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (tx_offloads	& RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			ol_flags |= PKT_TX_OUTER_IP_CKSUM;
 		else
 			ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
@@ -576,7 +576,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ol_flags |= PKT_TX_TCP_SEG;
 
 	/* Skip SW outer UDP checksum generation if HW supports it */
-	if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {
 		if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
 			udp_hdr->dgram_cksum
 				= rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
@@ -959,9 +959,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		if (info.is_tunnel == 1) {
 			if (info.tunnel_tso_segsz ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+			     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+			     RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				m->outer_l2_len = info.outer_l2_len;
 				m->outer_l3_len = info.outer_l3_len;
 				m->l2_len = info.l2_len;
@@ -1022,19 +1022,19 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					rte_be_to_cpu_16(info.outer_ethertype),
 					info.outer_l3_len);
 			/* dump tx packet info */
-			if ((tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					    DEV_TX_OFFLOAD_UDP_CKSUM |
-					    DEV_TX_OFFLOAD_TCP_CKSUM |
-					    DEV_TX_OFFLOAD_SCTP_CKSUM)) ||
+			if ((tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) ||
 				info.tso_segsz != 0)
 				printf("tx: m->l2_len=%d m->l3_len=%d "
 					"m->l4_len=%d\n",
 					m->l2_len, m->l3_len, m->l4_len);
 			if (info.is_tunnel == 1) {
 				if ((tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 				    (tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
 				    (tx_ol_flags & PKT_TX_OUTER_IPV6))
 					printf("tx: m->outer_l2_len=%d "
 						"m->outer_l3_len=%d\n",
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 9348618d0f8d..7d658d002cb6 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -100,11 +100,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 	vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags |= PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 0568ea794d48..1d878ba0a694 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -72,11 +72,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
 		if (likely(i < nb_rx - 1))
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a436..7ade9a686b7c 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -10,11 +10,11 @@ ol_flags_init(uint64_t tx_offload)
 {
 	uint64_t ol_flags = 0;
 
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) ?
 			PKT_TX_VLAN : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) ?
 			PKT_TX_QINQ : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) ?
 			PKT_TX_MACSEC : 0;
 
 	return ol_flags;
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 7c13210f04aa..1d0187723532 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -475,29 +475,29 @@ parse_event_printing_config(const char *optarg, int enable)
 static int
 parse_link_speed(int n)
 {
-	uint32_t speed = ETH_LINK_SPEED_FIXED;
+	uint32_t speed = RTE_ETH_LINK_SPEED_FIXED;
 
 	switch (n) {
 	case 1000:
-		speed |= ETH_LINK_SPEED_1G;
+		speed |= RTE_ETH_LINK_SPEED_1G;
 		break;
 	case 10000:
-		speed |= ETH_LINK_SPEED_10G;
+		speed |= RTE_ETH_LINK_SPEED_10G;
 		break;
 	case 25000:
-		speed |= ETH_LINK_SPEED_25G;
+		speed |= RTE_ETH_LINK_SPEED_25G;
 		break;
 	case 40000:
-		speed |= ETH_LINK_SPEED_40G;
+		speed |= RTE_ETH_LINK_SPEED_40G;
 		break;
 	case 50000:
-		speed |= ETH_LINK_SPEED_50G;
+		speed |= RTE_ETH_LINK_SPEED_50G;
 		break;
 	case 100000:
-		speed |= ETH_LINK_SPEED_100G;
+		speed |= RTE_ETH_LINK_SPEED_100G;
 		break;
 	case 200000:
-		speed |= ETH_LINK_SPEED_200G;
+		speed |= RTE_ETH_LINK_SPEED_200G;
 		break;
 	case 100:
 	case 10:
@@ -912,13 +912,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "pkt-filter-size")) {
 				if (!strcmp(optarg, "64K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_64K;
+						RTE_ETH_FDIR_PBALLOC_64K;
 				else if (!strcmp(optarg, "128K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_128K;
+						RTE_ETH_FDIR_PBALLOC_128K;
 				else if (!strcmp(optarg, "256K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_256K;
+						RTE_ETH_FDIR_PBALLOC_256K;
 				else
 					rte_exit(EXIT_FAILURE, "pkt-filter-size %s invalid -"
 						 " must be: 64K or 128K or 256K\n",
@@ -960,34 +960,34 @@ launch_args_parse(int argc, char** argv)
 			}
 #endif
 			if (!strcmp(lgopts[opt_idx].name, "disable-crc-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 			if (!strcmp(lgopts[opt_idx].name, "enable-lro"))
-				rx_offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 			if (!strcmp(lgopts[opt_idx].name, "enable-scatter"))
-				rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 			if (!strcmp(lgopts[opt_idx].name, "enable-rx-cksum"))
-				rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-rx-timestamp"))
-				rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 			if (!strcmp(lgopts[opt_idx].name, "enable-hw-vlan"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-filter"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-extend"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-qinq-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name, "enable-drop-en"))
 				rx_drop_en = 1;
@@ -1009,13 +1009,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
 				set_pkt_forwarding_mode(optarg);
 			if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
-				rss_hf = ETH_RSS_IP;
+				rss_hf = RTE_ETH_RSS_IP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
-				rss_hf = ETH_RSS_UDP;
+				rss_hf = RTE_ETH_RSS_UDP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
-				rss_hf |= ETH_RSS_LEVEL_INNERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_INNERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-outer"))
-				rss_hf |= ETH_RSS_LEVEL_OUTERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rxq")) {
 				n = atoi(optarg);
 				if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
@@ -1386,12 +1386,12 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
 				char *end = NULL;
 				n = strtoul(optarg, &end, 16);
-				if (n >= 0 && n <= ETH_MQ_RX_VMDQ_DCB_RSS)
+				if (n >= 0 && n <= RTE_ETH_MQ_RX_VMDQ_DCB_RSS)
 					rx_mq_mode = (enum rte_eth_rx_mq_mode)n;
 				else
 					rte_exit(EXIT_FAILURE,
 						 "rx-mq-mode must be >= 0 and <= %d\n",
-						 ETH_MQ_RX_VMDQ_DCB_RSS);
+						 RTE_ETH_MQ_RX_VMDQ_DCB_RSS);
 			}
 			if (!strcmp(lgopts[opt_idx].name, "record-core-cycles"))
 				record_core_cycles = 1;
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 6cbe9ba3c893..30bf897d6da8 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -337,7 +337,7 @@ uint64_t noisy_lkup_num_reads_writes;
 /*
  * Receive Side Scaling (RSS) configuration.
  */
-uint64_t rss_hf = ETH_RSS_IP; /* RSS IP by default. */
+uint64_t rss_hf = RTE_ETH_RSS_IP; /* RSS IP by default. */
 
 /*
  * Port topology configuration
@@ -454,12 +454,12 @@ struct rte_eth_rxmode rx_mode = {
 };
 
 struct rte_eth_txmode tx_mode = {
-	.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
+	.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
 };
 
-struct rte_fdir_conf fdir_conf = {
+struct rte_eth_fdir_conf fdir_conf = {
 	.mode = RTE_FDIR_MODE_NONE,
-	.pballoc = RTE_FDIR_PBALLOC_64K,
+	.pballoc = RTE_ETH_FDIR_PBALLOC_64K,
 	.status = RTE_FDIR_REPORT_STATUS,
 	.mask = {
 		.vlan_tci_mask = 0xFFEF,
@@ -513,7 +513,7 @@ uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 /*
  * hexadecimal bitmask of RX mq mode can be enabled.
  */
-enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
+enum rte_eth_rx_mq_mode rx_mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
 
 /*
  * Used to set forced link speed
@@ -1437,9 +1437,9 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
 			"Updating jumbo frame offload failed for port %u\n",
 			pid);
 
-	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		port->dev_conf.txmode.offloads &=
-			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Apply Rx offloads configuration */
 	for (i = 0; i < port->dev_info.max_rx_queues; i++)
@@ -1566,8 +1566,8 @@ init_config(void)
 
 	init_port_config();
 
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_UDP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO;
 	/*
 	 * Records which Mbuf pool to use by each logical core, if needed.
 	 */
@@ -3154,7 +3154,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3414,17 +3414,17 @@ update_jumbo_frame_offload(portid_t portid)
 		port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
 
 	if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
-		rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		on = false;
 	} else {
-		if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+		if ((port->dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0) {
 			fprintf(stderr,
 				"Frame size (%u) is not supported by port %u\n",
 				port->dev_conf.rxmode.max_rx_pkt_len,
 				portid);
 			return -1;
 		}
-		rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		on = true;
 	}
 
@@ -3436,16 +3436,16 @@ update_jumbo_frame_offload(portid_t portid)
 		/* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
 		for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
 			if (on)
-				port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+				port->rx_conf[qid].offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			else
-				port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+				port->rx_conf[qid].offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		}
 	}
 
 	/* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
 	 * if unset do it here
 	 */
-	if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0) {
 		ret = rte_eth_dev_set_mtu(portid,
 				port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
 		if (ret)
@@ -3486,9 +3486,9 @@ init_port_config(void)
 			if( port->dev_conf.rx_adv_conf.rss_conf.rss_hf != 0)
 				port->dev_conf.rxmode.mq_mode =
 					(enum rte_eth_rx_mq_mode)
-						(rx_mq_mode & ETH_MQ_RX_RSS);
+						(rx_mq_mode & RTE_ETH_MQ_RX_RSS);
 			else
-				port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+				port->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 		}
 
 		rxtx_port_config(port);
@@ -3575,9 +3575,9 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		vmdq_rx_conf->enable_default_pool = 0;
 		vmdq_rx_conf->default_pool = 0;
 		vmdq_rx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 		vmdq_tx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 
 		vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
 		for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
@@ -3585,7 +3585,7 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 			vmdq_rx_conf->pool_map[i].pools =
 				1 << (i % vmdq_rx_conf->nb_queue_pools);
 		}
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			vmdq_rx_conf->dcb_tc[i] = i % num_tcs;
 			vmdq_tx_conf->dcb_tc[i] = i % num_tcs;
 		}
@@ -3593,8 +3593,8 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		/* set DCB mode of RX and TX of multiple queues */
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_VMDQ_DCB);
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+					(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB);
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 	} else {
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&eth_conf->rx_adv_conf.dcb_rx_conf;
@@ -3610,23 +3610,23 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		rx_conf->nb_tcs = num_tcs;
 		tx_conf->nb_tcs = num_tcs;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			rx_conf->dcb_tc[i] = i % num_tcs;
 			tx_conf->dcb_tc[i] = i % num_tcs;
 		}
 
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_DCB_RSS);
+					(rx_mq_mode & RTE_ETH_MQ_RX_DCB_RSS);
 		eth_conf->rx_adv_conf.rss_conf = rss_conf;
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_DCB;
 	}
 
 	if (pfc_en)
 		eth_conf->dcb_capability_en =
-				ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+				RTE_ETH_DCB_PG_SUPPORT | RTE_ETH_DCB_PFC_SUPPORT;
 	else
-		eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+		eth_conf->dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT;
 
 	return 0;
 }
@@ -3653,7 +3653,7 @@ init_port_dcb_config(portid_t pid,
 	retval = get_eth_dcb_conf(pid, &port_conf, dcb_mode, num_tcs, pfc_en);
 	if (retval < 0)
 		return retval;
-	port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	/* re-configure the device . */
 	retval = rte_eth_dev_configure(pid, nb_rxq, nb_rxq, &port_conf);
@@ -3703,7 +3703,7 @@ init_port_dcb_config(portid_t pid,
 
 	rxtx_port_config(rte_port);
 	/* VLAN filter */
-	rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rte_port->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	for (i = 0; i < RTE_DIM(vlan_tags); i++)
 		rx_vft_set(pid, vlan_tags[i], 1);
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 16a3598e48c5..e4ad8a6a7cff 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -446,7 +446,7 @@ extern lcoreid_t bitrate_lcore_id;
 extern uint8_t bitrate_enabled;
 #endif
 
-extern struct rte_fdir_conf fdir_conf;
+extern struct rte_eth_fdir_conf fdir_conf;
 
 /*
  * Configuration of packet segments used to scatter received packets
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f5d340..5409d7a0deb0 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -352,11 +352,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	/*
diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index ee11987bae28..7c0ebec5bd4b 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -14,10 +14,10 @@ test_link_status_up_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -27,9 +27,9 @@ test_link_status_up_default(void)
 	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
 		text, strlen(text), "Invalid default link status string");
 
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_FIXED;
-	link_status.link_speed = ETH_SPEED_NUM_10M,
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_FIXED;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_10M,
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #2: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -37,7 +37,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_UNKNOWN;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -45,7 +45,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_NONE;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -54,9 +54,9 @@ test_link_status_up_default(void)
 		"string with HDX");
 
 	/* test max str len */
-	link_status.link_speed = ETH_SPEED_NUM_200G;
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_AUTONEG;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_200G;
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #4:len = %d, %s\n", ret, text);
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -69,10 +69,10 @@ test_link_status_down_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -90,9 +90,9 @@ test_link_status_invalid(void)
 	int ret = 0;
 	struct rte_eth_link link_status = {
 		.link_speed = 55555,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -116,21 +116,21 @@ test_link_speed_all_values(void)
 		const char *value;
 		uint32_t link_speed;
 	} speed_str_map[] = {
-		{ "None",   ETH_SPEED_NUM_NONE },
-		{ "10 Mbps",  ETH_SPEED_NUM_10M },
-		{ "100 Mbps", ETH_SPEED_NUM_100M },
-		{ "1 Gbps",   ETH_SPEED_NUM_1G },
-		{ "2.5 Gbps", ETH_SPEED_NUM_2_5G },
-		{ "5 Gbps",   ETH_SPEED_NUM_5G },
-		{ "10 Gbps",  ETH_SPEED_NUM_10G },
-		{ "20 Gbps",  ETH_SPEED_NUM_20G },
-		{ "25 Gbps",  ETH_SPEED_NUM_25G },
-		{ "40 Gbps",  ETH_SPEED_NUM_40G },
-		{ "50 Gbps",  ETH_SPEED_NUM_50G },
-		{ "56 Gbps",  ETH_SPEED_NUM_56G },
-		{ "100 Gbps", ETH_SPEED_NUM_100G },
-		{ "200 Gbps", ETH_SPEED_NUM_200G },
-		{ "Unknown",  ETH_SPEED_NUM_UNKNOWN },
+		{ "None",   RTE_ETH_SPEED_NUM_NONE },
+		{ "10 Mbps",  RTE_ETH_SPEED_NUM_10M },
+		{ "100 Mbps", RTE_ETH_SPEED_NUM_100M },
+		{ "1 Gbps",   RTE_ETH_SPEED_NUM_1G },
+		{ "2.5 Gbps", RTE_ETH_SPEED_NUM_2_5G },
+		{ "5 Gbps",   RTE_ETH_SPEED_NUM_5G },
+		{ "10 Gbps",  RTE_ETH_SPEED_NUM_10G },
+		{ "20 Gbps",  RTE_ETH_SPEED_NUM_20G },
+		{ "25 Gbps",  RTE_ETH_SPEED_NUM_25G },
+		{ "40 Gbps",  RTE_ETH_SPEED_NUM_40G },
+		{ "50 Gbps",  RTE_ETH_SPEED_NUM_50G },
+		{ "56 Gbps",  RTE_ETH_SPEED_NUM_56G },
+		{ "100 Gbps", RTE_ETH_SPEED_NUM_100G },
+		{ "200 Gbps", RTE_ETH_SPEED_NUM_200G },
+		{ "Unknown",  RTE_ETH_SPEED_NUM_UNKNOWN },
 		{ "Invalid",   50505 }
 	};
 
diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index 9198767b4194..bb7917010d62 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -106,7 +106,7 @@ port_init_rx_intr(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 		.intr_conf = {
 			.rxq = 1,
@@ -121,7 +121,7 @@ port_init(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 	};
 
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 96733554b6c4..40ab0d5c4ca4 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -74,7 +74,7 @@ static const struct rte_eth_txconf tx_conf = {
 
 static const struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 8a5c8310a8b4..23c024aa1b0c 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -134,12 +134,12 @@ static uint16_t vlan_id = 0x100;
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7adc7..1556f14d6921 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -107,12 +107,12 @@ static struct link_bonding_unittest_params test_params  = {
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 5dac60ca1edd..93caaf986c2f 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -80,29 +80,29 @@ static struct link_bonding_rssconf_unittest_params test_params  = {
  */
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
 static struct rte_eth_conf rss_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IPV6,
+			.rss_hf = RTE_ETH_RSS_IPV6,
 		},
 	},
 	.lpbk_mode = 0,
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 3a248d512c4a..da7b7ad1f7cc 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -62,12 +62,12 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 1,  /* enable loopback */
 };
@@ -156,7 +156,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -823,7 +823,7 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		/* bulk alloc rx, full-featured tx */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "hybrid")) {
 		/* bulk alloc rx, vector tx
@@ -832,13 +832,13 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		 */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "full")) {
 		/* full feature rx,tx pair */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		return 0;
 	}
 
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7036f401ed95..6eecfa385537 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -53,7 +53,7 @@ static int  virtual_ethdev_stop(struct rte_eth_dev *eth_dev __rte_unused)
 	void *pkt = NULL;
 	struct virtual_ethdev_private *prv = eth_dev->data->dev_private;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 	while (rte_ring_dequeue(prv->rx_queue, &pkt) != -ENOENT)
 		rte_pktmbuf_free(pkt);
@@ -178,7 +178,7 @@ virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
 		int wait_to_complete __rte_unused)
 {
 	if (!bonded_eth_dev->data->dev_started)
-		bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		bonded_eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -574,9 +574,9 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
 	eth_dev->data->nb_rx_queues = (uint16_t)1;
 	eth_dev->data->nb_tx_queues = (uint16_t)1;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
-	eth_dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	eth_dev->data->mac_addrs = rte_zmalloc(name, RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 53560d3830d7..1c0ea988f239 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,7 @@ Features of the OCTEON cnxk SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue configuration.
 - HW managed event vectorization on CN10K for packets enqueued from ethdev to
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 11fbebfcd243..0fa57abfa3e0 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -35,7 +35,7 @@ Features of the OCTEON TX2 SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue config.
 
diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
index 457ac441429a..13f30e39363e 100644
--- a/doc/guides/howto/debug_troubleshoot.rst
+++ b/doc/guides/howto/debug_troubleshoot.rst
@@ -71,7 +71,7 @@ RX Port and associated core :numref:`dtg_rx_rate`.
    * Identify if port Speed and Duplex is matching to desired values with
      ``rte_eth_link_get``.
 
-   * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
+   * Check ``RTE_ETH_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
 
    * Check promiscuous mode if the drops do not occur for unique MAC address
      with ``rte_eth_promiscuous_get``.
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index e75f4fa9e3bc..77827e750195 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -877,22 +877,22 @@ processing. This improved performance is derived from a number of optimizations:
     * TX: only the following reduced set of transmit offloads is supported in
       vector mode::
 
-       DEV_TX_OFFLOAD_MBUF_FAST_FREE
+       RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
 
     * RX: only the following reduced set of receive offloads is supported in
       vector mode (note that jumbo MTU is allowed only when the MTU setting
-      does not require `DEV_RX_OFFLOAD_SCATTER` to be enabled)::
-
-       DEV_RX_OFFLOAD_VLAN_STRIP
-       DEV_RX_OFFLOAD_KEEP_CRC
-       DEV_RX_OFFLOAD_JUMBO_FRAME
-       DEV_RX_OFFLOAD_IPV4_CKSUM
-       DEV_RX_OFFLOAD_UDP_CKSUM
-       DEV_RX_OFFLOAD_TCP_CKSUM
-       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
-       DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
-       DEV_RX_OFFLOAD_RSS_HASH
-       DEV_RX_OFFLOAD_VLAN_FILTER
+      does not require `RTE_ETH_RX_OFFLOAD_SCATTER` to be enabled)::
+
+       RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+       RTE_ETH_RX_OFFLOAD_KEEP_CRC
+       RTE_ETH_RX_OFFLOAD_JUMBO_FRAME
+       RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_RSS_HASH
+       RTE_ETH_RX_OFFLOAD_VLAN_FILTER
 
 The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
 vector processing is made at run-time when the port is started; if no transmit
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a95..0209730b904a 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -432,7 +432,7 @@ Limitations
 .. code-block:: console
 
      vlan_offload = rte_eth_dev_get_vlan_offload(port);
-     vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
+     vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
      rte_eth_dev_set_vlan_offload(port, vlan_offload);
 
 Another alternative is modify the adapter's ingress VLAN rewrite mode so that
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index a96e12d15515..7f7d6ae45658 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -30,7 +30,7 @@ Speed capabilities
 
 Supports getting the speed capabilities that the current device is capable of.
 
-* **[provides] rte_eth_dev_info**: ``speed_capa:ETH_LINK_SPEED_*``.
+* **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
 * **[related]  API**: ``rte_eth_dev_info_get()``.
 
 
@@ -101,11 +101,11 @@ Supports Rx interrupts.
 Lock-free Tx queue
 ------------------
 
-If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+If a PMD advertises RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
-* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
 
@@ -117,8 +117,8 @@ Fast mbuf free
 Supports optimization for fast release of mbufs following successful Tx.
 Requires that per queue, all mbufs come from the same mempool and has refcnt = 1.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
 
 
 .. _nic_features_free_tx_mbuf_on_demand:
@@ -165,7 +165,7 @@ Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_JUMBO_FRAME``.
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -178,7 +178,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,12 +206,12 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
   ``dev_conf.rxmode.max_lro_pkt_size``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
 * **[provides]   rte_eth_dev_info**: ``max_lro_pkt_size``.
 
 
@@ -222,12 +222,12 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_TCP_TSO,RTE_ETH_TX_OFFLOAD_UDP_TSO``.
 
 
 .. _nic_features_promiscuous_mode:
@@ -288,9 +288,9 @@ RSS hash
 
 Supports RSS hashing on RX.
 
-* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_RSS_FLAG``.
+* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_RSS_FLAG``.
 * **[uses]     user config**: ``dev_conf.rx_adv_conf.rss_conf``.
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
 
@@ -303,7 +303,7 @@ Inner RSS
 Supports RX RSS hashing on Inner headers.
 
 * **[uses]    rte_flow_action_rss**: ``level``.
-* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
 
 
@@ -340,7 +340,7 @@ VMDq
 
 Supports Virtual Machine Device Queues (VMDq).
 
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_VMDQ_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_VMDQ_FLAG``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_rx_conf``.
 * **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -363,7 +363,7 @@ DCB
 
 Supports Data Center Bridging (DCB).
 
-* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_DCB_FLAG``.
+* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_DCB_FLAG``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.dcb_rx_conf``.
 * **[uses]       user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -379,7 +379,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -428,12 +428,12 @@ Supports inline crypto processing defined by rte_security library to perform cry
 operations of security protocol while packet is received in NIC. NIC is not aware
 of protocol operations. See Security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
@@ -449,13 +449,13 @@ protocol processing for the security protocol (e.g. IPsec, MACSEC) while the
 packet is received at NIC. The NIC is capable of understanding the security
 protocol operations. See security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``get_userdata``,
   ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
@@ -469,7 +469,7 @@ CRC offload
 Supports CRC stripping by hardware.
 A PMD assumed to support CRC stripping by default. PMD should advertise if it supports keeping CRC.
 
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_KEEP_CRC``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_KEEP_CRC``.
 
 
 .. _nic_features_vlan_offload:
@@ -479,13 +479,13 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_STRIP,RTE_ETH_RX_OFFLOAD_VLAN_FILTER,RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_VLAN_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
 
@@ -497,14 +497,14 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
   ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
   ``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 
 
 .. _nic_features_fec:
@@ -518,7 +518,7 @@ information to correct the bit errors generated during data packet transmission
 improves signal quality but also brings a delay to signals. This function can be enabled or disabled as required.
 
 * **[implements] eth_dev_ops**: ``fec_get_capability``, ``fec_get``, ``fec_set``.
-* **[provides]   rte_eth_fec_capa**: ``speed:ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
+* **[provides]   rte_eth_fec_capa**: ``speed:RTE_ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
 * **[related]    API**: ``rte_eth_fec_get_capability()``, ``rte_eth_fec_get()``, ``rte_eth_fec_set()``.
 
 
@@ -529,16 +529,16 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 
 
 .. _nic_features_l4_checksum_offload:
@@ -548,8 +548,8 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -557,8 +557,8 @@ Supports L4 checksum offload.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 
 .. _nic_features_hw_timestamp:
 
@@ -567,10 +567,10 @@ Timestamp offload
 
 Supports Timestamp.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.timestamp``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[related] eth_dev_ops**: ``read_clock``.
 
 .. _nic_features_macsec_offload:
@@ -580,11 +580,11 @@ MACsec offload
 
 Supports MACsec.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 
 
 .. _nic_features_inner_l3_checksum:
@@ -594,16 +594,16 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
 .. _nic_features_inner_l4_checksum:
@@ -613,15 +613,15 @@ Inner L4 checksum
 
 Supports inner packet L4 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
   ``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 
 
 .. _nic_features_packet_type_parsing:
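For illustration, a minimal sketch of how an application might probe the capabilities described in these feature tables and request the outer UDP checksum offload under the renamed flags (``port_id`` and the surrounding error handling are assumptions, not part of the patch):

    #include <rte_ethdev.h>

    static int
    request_outer_udp_cksum(uint16_t port_id, struct rte_eth_conf *conf)
    {
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        /* Only request the offloads the device actually advertises. */
        if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM)
            conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM;
        if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
            conf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;

        return 0;
    }
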
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 7b8ef0e7823d..3dff65d89b6d 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -78,11 +78,11 @@ To enable via ``RX_OLFLAGS`` use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
 To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads``
 will be checked:
 
-*   ``DEV_RX_OFFLOAD_VLAN_EXTEND``
+*   ``RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``
 
-*   ``DEV_RX_OFFLOAD_CHECKSUM``
+*   ``RTE_ETH_RX_OFFLOAD_CHECKSUM``
 
-*   ``DEV_RX_OFFLOAD_HEADER_SPLIT``
+*   ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
 
 *   ``fdir_conf->mode``
 
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index fcea8151bf3c..e60e3b2a761d 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -222,21 +222,21 @@ For example,
     *   If the max number of VFs (max_vfs) is set in the range of 1 to 32:
 
         If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are totally 32
-        pools (ETH_32_POOLS), and each VF could have 4 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 4 Rx queues;
 
         If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are totally 32
-        pools (ETH_32_POOLS), and each VF could have 2 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 2 Rx queues;
 
     *   If the max number of VFs (max_vfs) is in the range of 33 to 64:
 
         If the number of Rx queues in specified as 4 (``--rxq=4`` in testpmd), then error message is expected
         as ``rxq`` is not correct at this case;
 
-        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (ETH_64_POOLS),
+        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there are 64 pools in total (RTE_ETH_64_POOLS),
         and each VF have 2 Rx queues;
 
-    On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
-    or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
+    On the host, to enable VF RSS functionality, the Rx mq mode should be set to RTE_ETH_MQ_RX_VMDQ_RSS
+    or RTE_ETH_MQ_RX_RSS, and SR-IOV mode should be activated (max_vfs >= 1).
     It also needs config VF RSS information like hash function, RSS key, RSS key length.
 
 .. note::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index b82e63438285..24fbccc982f5 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -69,13 +69,13 @@ Other features are supported using optional MACRO configuration. They include:
 
 To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be checked:
 
-*   DEV_RX_OFFLOAD_VLAN_STRIP
+*   RTE_ETH_RX_OFFLOAD_VLAN_STRIP
 
-*   DEV_RX_OFFLOAD_VLAN_EXTEND
+*   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
 
-*   DEV_RX_OFFLOAD_CHECKSUM
+*   RTE_ETH_RX_OFFLOAD_CHECKSUM
 
-*   DEV_RX_OFFLOAD_HEADER_SPLIT
+*   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
 
 *   dev_conf
 
@@ -143,13 +143,13 @@ l3fwd
 ~~~~~
 
 When running l3fwd with vPMD, there is one thing to note.
-In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
+In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
 Otherwise, by default, RX vPMD is disabled.
 
 load_balancer
 ~~~~~~~~~~~~~
 
-As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
+As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
 In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.
 
 
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d882..6facb68b9545 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -371,7 +371,7 @@ Limitations
 
 - CRC:
 
-  - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
+  - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
     for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
     The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
 
@@ -607,7 +607,7 @@ Driver options
   small-packet traffic.
 
   When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
-  user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
+  user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
   configure large stride size enough to accommodate max_rx_pkt_len as long as
   device allows. Note that this can waste system memory compared to enabling Rx
   scatter and multi-segment packet.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 3ce696b605d1..681010d9ed7d 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -275,7 +275,7 @@ An example utility for eBPF instruction generation in the format of C arrays wil
 be added in next releases
 
 TAP reports on supported RSS functions as part of dev_infos_get callback:
-``ETH_RSS_IP``, ``ETH_RSS_UDP`` and ``ETH_RSS_TCP``.
+``RTE_ETH_RSS_IP``, ``RTE_ETH_RSS_UDP`` and ``RTE_ETH_RSS_TCP``.
 **Known limitation:** TAP supports all of the above hash functions together
 and not in partial combinations.
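As a usage sketch (the ``port_conf`` initializer below is illustrative, not from the patch), requesting exactly the hash functions TAP reports would look like:

    struct rte_eth_conf port_conf = {
        .rxmode = {
            .mq_mode = RTE_ETH_MQ_RX_RSS,
        },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_key = NULL,  /* keep the driver's default key */
                .rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP,
            },
        },
    };
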
 
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b74..9b2c31a2f0bc 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -194,11 +194,11 @@ To segment an outgoing packet, an application must:
 
    - the bit mask of required GSO types. The GSO library uses the same macros as
      those that describe a physical device's TX offloading capabilities (i.e.
-     ``DEV_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
+     ``RTE_ETH_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
      wants to segment TCP/IPv4 packets, it should set gso_types to
-     ``DEV_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
-     supported for gso_types are ``DEV_TX_OFFLOAD_VXLAN_TNL_TSO``, and
-     ``DEV_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
+     ``RTE_ETH_TX_OFFLOAD_TCP_TSO``. The only other values currently
+     supported for gso_types are ``RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO`` and
+     ``RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
      allowed.
 
    - a flag, that indicates whether the IPv4 headers of output segments should
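To make the ``gso_types`` usage above concrete, a minimal sketch of a GSO context requesting TCP/IPv4 segmentation with the renamed macro (the mempools, ``pkt`` and the segment array size are assumptions):

    #include <rte_gso.h>

    #define MAX_GSO_SEGS 64  /* assumed upper bound on output segments */

    struct rte_gso_ctx gso_ctx = {
        .direct_pool = direct_pool,      /* application-created mempool */
        .indirect_pool = indirect_pool,  /* application-created mempool */
        .gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO,
        .gso_size = 1400,                /* maximum payload per output segment */
        .flag = 0,
    };

    struct rte_mbuf *segs[MAX_GSO_SEGS];
    int nb_segs = rte_gso_segment(pkt, &gso_ctx, segs, RTE_DIM(segs));
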
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e43a..dc6186a44ae2 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -137,7 +137,7 @@ a vxlan-encapsulated tcp packet:
     mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
     set out_ip checksum to 0 in the packet
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
 
 - calculate checksum of out_ip and out_udp::
 
@@ -147,8 +147,8 @@ a vxlan-encapsulated tcp packet:
     set out_ip checksum to 0 in the packet
     set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM
-  and DEV_TX_OFFLOAD_UDP_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+  and RTE_ETH_TX_OFFLOAD_UDP_CKSUM.
 
 - calculate checksum of in_ip::
 
@@ -158,7 +158,7 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
 
   This is similar to case 1), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of in_ip and in_tcp::
@@ -170,8 +170,8 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
   This is similar to case 2), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
-  DEV_TX_OFFLOAD_TCP_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and
+  RTE_ETH_TX_OFFLOAD_TCP_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - segment inner TCP::
@@ -185,7 +185,7 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header without including the IP
       payload length using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_TCP_TSO.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of out_ip, in_ip, in_tcp::
@@ -200,8 +200,8 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM,
-  DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
+  RTE_ETH_TX_OFFLOAD_UDP_CKSUM and RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM.
 
 The list of flags and their precise meaning is described in the mbuf API
 documentation (rte_mbuf.h). Also refer to the testpmd source code
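A compact C rendering of the first case above (outer IP checksum computed by hardware), assuming the port advertises RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and that ``m`` points to the mbuf being prepared:

    struct rte_ether_hdr *eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
    struct rte_ipv4_hdr *out_ip = (struct rte_ipv4_hdr *)(eth + 1);

    m->l2_len = sizeof(*eth);
    m->l3_len = sizeof(*out_ip);  /* assumes no IP options */
    m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM;
    out_ip->hdr_checksum = 0;     /* hardware fills in the IPv4 checksum */
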
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 0d4ac77a7ccf..68312898448c 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
 
 Avoiding lock contention is a key issue in a multi-core environment.
 To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
 In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
 
 To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
 
 Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
 
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
 concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
 
 *  Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
 *  In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
    enables more scaling as all workers can send the packets.
 
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
 
 Device Identification, Ownership and Configuration
 --------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
 The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
 Supported offloads can be either per-port or per-queue.
 
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
 Any requested offloading by an application must be within the device capabilities.
 Any offloading is disabled by default if it is not set in the parameter
 ``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
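A short sketch of the enabling flow described here with the renamed flags (``port_id``, queue counts, descriptor count, socket and mempool are placeholders):

    struct rte_eth_dev_info dev_info;
    struct rte_eth_conf port_conf = { 0 };
    struct rte_eth_rxconf rxq_conf;

    rte_eth_dev_info_get(port_id, &dev_info);

    /* Request a per-port Rx offload only if the device advertises it. */
    if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
        port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;

    rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);

    /* Per-queue offloads are passed through rte_eth_rxconf at queue setup. */
    rxq_conf = dev_info.default_rxconf;
    rxq_conf.offloads = port_conf.rxmode.offloads;
    rte_eth_rx_queue_setup(port_id, 0, nb_rxd, socket_id, &rxq_conf, mb_pool);
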
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c05..1bac8f04b96e 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1835,23 +1835,23 @@ only matching traffic goes through.
 
 .. table:: RSS
 
-   +---------------+---------------------------------------------+
-   | Field         | Value                                       |
-   +===============+=============================================+
-   | ``func``      | RSS hash function to apply                  |
-   +---------------+---------------------------------------------+
-   | ``level``     | encapsulation level for ``types``           |
-   +---------------+---------------------------------------------+
-   | ``types``     | specific RSS hash types (see ``ETH_RSS_*``) |
-   +---------------+---------------------------------------------+
-   | ``key_len``   | hash key length in bytes                    |
-   +---------------+---------------------------------------------+
-   | ``queue_num`` | number of entries in ``queue``              |
-   +---------------+---------------------------------------------+
-   | ``key``       | hash key                                    |
-   +---------------+---------------------------------------------+
-   | ``queue``     | queue indices to use                        |
-   +---------------+---------------------------------------------+
+   +---------------+-------------------------------------------------+
+   | Field         | Value                                           |
+   +===============+=================================================+
+   | ``func``      | RSS hash function to apply                      |
+   +---------------+-------------------------------------------------+
+   | ``level``     | encapsulation level for ``types``               |
+   +---------------+-------------------------------------------------+
+   | ``types``     | specific RSS hash types (see ``RTE_ETH_RSS_*``) |
+   +---------------+-------------------------------------------------+
+   | ``key_len``   | hash key length in bytes                        |
+   +---------------+-------------------------------------------------+
+   | ``queue_num`` | number of entries in ``queue``                  |
+   +---------------+-------------------------------------------------+
+   | ``key``       | hash key                                        |
+   +---------------+-------------------------------------------------+
+   | ``queue``     | queue indices to use                            |
+   +---------------+-------------------------------------------------+
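A brief sketch of filling this action with the renamed hash-type macros (queue indices and the surrounding flow rule are placeholders):

    uint16_t queues[2] = { 0, 1 };
    struct rte_flow_action_rss rss = {
        .func = RTE_ETH_HASH_FUNCTION_DEFAULT,
        .level = 0,                        /* outermost encapsulation */
        .types = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
        .key_len = 0,                      /* keep the device default key */
        .key = NULL,
        .queue_num = RTE_DIM(queues),
        .queue = queues,
    };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
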
 
 Action: ``PF``
 ^^^^^^^^^^^^^^
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index f72bc8a78fa6..e3bd451917f0 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -560,7 +560,7 @@ created by the application is attached to the security session by the API
 
 For Inline Crypto and Inline protocol offload, device specific defined metadata is
 updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
-``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
+``RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
 
 For inline protocol offloaded ingress traffic, the application can register a
 pointer, ``userdata`` , in the security session. When the packet is received,
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b0b..20159a1c9a90 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,22 +58,16 @@ Deprecation Notices
   ``RTE_ETH_FLOW_MAX`` is one sample of the mentioned case, adding a new flow
   type will break the ABI because of ``flex_mask[RTE_ETH_FLOW_MAX]`` array
   usage in following public struct hierarchy:
-  ``rte_eth_fdir_flex_conf -> rte_fdir_conf -> rte_eth_conf (in the middle)``.
+  ``rte_eth_fdir_flex_conf -> rte_eth_fdir_conf -> rte_eth_conf (in the middle)``.
   Need to identify this kind of usages and fix in 20.11, otherwise this blocks
   us extending existing enum/define.
   One solution can be using a fixed size array instead of ``.*MAX.*`` value.
 
-* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
-  Macros will be added for backward compatibility.
-  Backward compatibility macros will be removed on v22.11.
-  A few old backward compatibility macros from 2013 that does not have
-  proper prefix will be removed on v21.11.
-
 * ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field,
   and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``),
   will be removed in DPDK 20.11.
 
-* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
+* ethdev: New offload flags ``RTE_ETH_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
   This will allow application to enable or disable PMDs from updating
   ``rte_mbuf::hash::fdir``.
   This scheme will allow PMDs to avoid writes to ``rte_mbuf`` fields on Rx and
@@ -98,7 +92,7 @@ Deprecation Notices
   either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
 
   An application may need to configure device for a specific Rx packet size, like for
-  cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
+  cases ``RTE_ETH_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
   can't be bigger than Rx buffer size.
   To cover these cases an application needs to know the device packet overhead to be
   able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f96e..782574dd39d5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -209,12 +209,12 @@ Where:
     device will ensure the ordering. Ordering will be lost when tried in PARALLEL.
 
 *   ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_RX_OFFLOAD_* values). It is an optional parameter and
     allows user to disable some of the RX HW offload capabilities.
     By default all HW RX offloads are enabled.
 
 *   ``--txoffload MASK``: TX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_TX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_TX_OFFLOAD_* values). It is an optional parameter and
     allows user to disable some of the TX HW offload capabilities.
     By default all HW TX offloads are enabled.
 
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 6061674239f4..d7f5951d4639 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -526,7 +526,7 @@ The command line options are:
     Set the hexadecimal bitmask of RX multi queue mode which can be enabled.
     The default value is 0x7::
 
-       ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG | ETH_MQ_RX_VMDQ_FLAG
+       RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG
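    For example, assuming the flag values keep their current definitions (RSS = 0x1, DCB = 0x2, VMDQ = 0x4), passing ``--rx-mq-mode 0x1`` would allow only RSS.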
 
 *   ``--record-core-cycles``
 
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index be52e6f72dab..a922988607ef 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -90,20 +90,20 @@ int dpaa_intr_disable(char *if_name);
 struct usdpaa_ioctl_link_status_args_old {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
-	/* link autoneg (ETH_LINK_AUTONEG/FIXED)*/
+	/* link autoneg (RTE_ETH_LINK_AUTONEG/FIXED)*/
 	int     link_autoneg;
 
 };
@@ -111,16 +111,16 @@ struct usdpaa_ioctl_link_status_args {
 struct usdpaa_ioctl_update_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_update_link_speed {
 	/* network device node name*/
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
 };
 
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
index bab25fd72eee..360bf75d3861 100644
--- a/drivers/common/cnxk/roc_npc.h
+++ b/drivers/common/cnxk/roc_npc.h
@@ -153,7 +153,7 @@ enum roc_npc_rss_hash_function {
 struct roc_npc_action_rss {
 	enum roc_npc_rss_hash_function func;
 	uint32_t level;
-	uint64_t types;	       /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types;	       /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len;      /**< Hash key length in bytes. */
 	uint32_t queue_num;    /**< Number of entries in @p queue. */
 	const uint8_t *key;    /**< Hash key. */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index b73b211fd249..fb5d549e6227 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -91,10 +91,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(af_packet_logtype, NOTICE);
@@ -265,7 +265,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -295,7 +295,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 		internals->tx_queue[i].sockfd = -1;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -316,8 +316,8 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
 	dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	return 0;
 }
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 74ffa4511284..dbf745852716 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -163,10 +163,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 /* List which tracks PMDs to facilitate sharing UMEMs across them. */
@@ -654,7 +654,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -663,7 +663,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c7a..b618cba3f023 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -736,14 +736,14 @@ eth_ark_dev_info_get(struct rte_eth_dev *dev,
 		.nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
 
 	/* ARK PMD supports all line rates, how do we indicate that here ?? */
-	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
-				ETH_LINK_SPEED_10G |
-				ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G);
-
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_TIMESTAMP;
+	dev_info->speed_capa = (RTE_ETH_LINK_SPEED_1G |
+				RTE_ETH_LINK_SPEED_10G |
+				RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G);
+
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return 0;
 }
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519e2..5af1cff3770e 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -154,21 +154,21 @@ static struct rte_pci_driver rte_atl_pmd = {
 	.remove = eth_atl_pci_remove,
 };
 
-#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
-			| DEV_RX_OFFLOAD_IPV4_CKSUM \
-			| DEV_RX_OFFLOAD_UDP_CKSUM \
-			| DEV_RX_OFFLOAD_TCP_CKSUM \
-			| DEV_RX_OFFLOAD_JUMBO_FRAME \
-			| DEV_RX_OFFLOAD_MACSEC_STRIP \
-			| DEV_RX_OFFLOAD_VLAN_FILTER)
-
-#define ATL_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT \
-			| DEV_TX_OFFLOAD_IPV4_CKSUM \
-			| DEV_TX_OFFLOAD_UDP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_TSO \
-			| DEV_TX_OFFLOAD_MACSEC_INSERT \
-			| DEV_TX_OFFLOAD_MULTI_SEGS)
+#define ATL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP \
+			| RTE_ETH_RX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_JUMBO_FRAME \
+			| RTE_ETH_RX_OFFLOAD_MACSEC_STRIP \
+			| RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+
+#define ATL_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT \
+			| RTE_ETH_TX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_TSO \
+			| RTE_ETH_TX_OFFLOAD_MACSEC_INSERT \
+			| RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define SFP_EEPROM_SIZE 0x100
 
@@ -489,7 +489,7 @@ atl_dev_start(struct rte_eth_dev *dev)
 	/* set adapter started */
 	hw->adapter_stopped = 0;
 
-	if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR,
 		"Invalid link_speeds for port %u, fix speed not supported",
 				dev->data->port_id);
@@ -656,18 +656,18 @@ atl_dev_set_link_up(struct rte_eth_dev *dev)
 	uint32_t link_speeds = dev->data->dev_conf.link_speeds;
 	uint32_t speed_mask = 0;
 
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed_mask = hw->aq_nic_cfg->link_speed_msk;
 	} else {
-		if (link_speeds & ETH_LINK_SPEED_10G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed_mask |= AQ_NIC_RATE_10G;
-		if (link_speeds & ETH_LINK_SPEED_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed_mask |= AQ_NIC_RATE_5G;
-		if (link_speeds & ETH_LINK_SPEED_1G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed_mask |= AQ_NIC_RATE_1G;
-		if (link_speeds & ETH_LINK_SPEED_2_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed_mask |=  AQ_NIC_RATE_2G5;
-		if (link_speeds & ETH_LINK_SPEED_100M)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed_mask |= AQ_NIC_RATE_100M;
 	}
 
@@ -1128,10 +1128,10 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = HW_ATL_B0_RSS_REDIRECTION_MAX;
 	dev_info->flow_type_rss_offloads = ATL_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
-	dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 
 	return 0;
 }
@@ -1176,10 +1176,10 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 	u32 fc = AQ_NIC_FC_OFF;
 	int err = 0;
 
-	link.link_status = ETH_LINK_DOWN;
+	link.link_status = RTE_ETH_LINK_DOWN;
 	link.link_speed = 0;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = hw->is_autoneg ? ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = hw->is_autoneg ? RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 	memset(&old, 0, sizeof(old));
 
 	/* load old link status */
@@ -1199,8 +1199,8 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 		return 0;
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = hw->aq_link_status.mbps;
 
 	rte_eth_linkstatus_set(dev, &link);
@@ -1334,7 +1334,7 @@ atl_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -1533,13 +1533,13 @@ atl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->aq_fw_ops->get_flow_control(hw, &fc);
 
 	if (fc == AQ_NIC_FC_OFF)
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	else if ((fc & AQ_NIC_FC_RX) && (fc & AQ_NIC_FC_TX))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (fc & AQ_NIC_FC_RX)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (fc & AQ_NIC_FC_TX)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 
 	return 0;
 }
@@ -1554,13 +1554,13 @@ atl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	if (hw->aq_fw_ops->set_flow_control == NULL)
 		return -ENOTSUP;
 
-	if (fc_conf->mode == RTE_FC_NONE)
+	if (fc_conf->mode == RTE_ETH_FC_NONE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_OFF;
-	else if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_RX;
-	else if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_TX;
-	else if (fc_conf->mode == RTE_FC_FULL)
+	else if (fc_conf->mode == RTE_ETH_FC_FULL)
 		hw->aq_nic_cfg->flow_control = (AQ_NIC_FC_RX | AQ_NIC_FC_TX);
 
 	if (old_flow_control != hw->aq_nic_cfg->flow_control)
@@ -1731,14 +1731,14 @@ atl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	ret = atl_enable_vlan_filter(dev, mask & ETH_VLAN_FILTER_MASK);
+	ret = atl_enable_vlan_filter(dev, mask & RTE_ETH_VLAN_FILTER_MASK);
 
-	cfg->vlan_strip = !!(mask & ETH_VLAN_STRIP_MASK);
+	cfg->vlan_strip = !!(mask & RTE_ETH_VLAN_STRIP_MASK);
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++)
 		hw_atl_rpo_rx_desc_vlan_stripping_set(hw, cfg->vlan_strip, i);
 
-	if (mask & ETH_VLAN_EXTEND_MASK)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK)
 		ret = -ENOTSUP;
 
 	return ret;
@@ -1754,10 +1754,10 @@ atl_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 	PMD_INIT_FUNC_TRACE();
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		hw_atl_rpf_vlan_inner_etht_set(hw, tpid);
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		hw_atl_rpf_vlan_outer_etht_set(hw, tpid);
 		break;
 	default:
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c97..da993be35faa 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -11,15 +11,15 @@
 #include "hw_atl/hw_atl_utils.h"
 
 #define ATL_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define ATL_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct atl_adapter *)adapter)->hw)
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306ec..ddf110d6ce7e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -145,10 +145,10 @@ atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 
 	rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		(DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		(RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
 
 	/* allocate memory for the software ring */
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff5b..e870ced7e992 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -2011,9 +2011,9 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
 	/* Setup required number of queues */
 	_avp_set_queue_counts(eth_dev);
 
-	mask = (ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+	mask = (RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 	ret = avp_vlan_offload_set(eth_dev, mask);
 	if (ret < 0) {
 		PMD_DRV_LOG(ERR, "VLAN offload set failed by host, ret=%d\n",
@@ -2153,8 +2153,8 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_eth_link *link = &eth_dev->data->dev_link;
 
-	link->link_speed = ETH_SPEED_NUM_10G;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = RTE_ETH_SPEED_NUM_10G;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_status = !!(avp->flags & AVP_F_LINKUP);
 
 	return -1;
@@ -2204,8 +2204,8 @@ avp_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
 	dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
 	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	}
 
 	return 0;
@@ -2218,9 +2218,9 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
 	uint64_t offloads = dev_conf->rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-			if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
 			else
 				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
@@ -2229,13 +2229,13 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
 	}
 
diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c
index 786288a7b079..c0f033e06b15 100644
--- a/drivers/net/axgbe/axgbe_dev.c
+++ b/drivers/net/axgbe/axgbe_dev.c
@@ -840,11 +840,11 @@ static void axgbe_rss_options(struct axgbe_port *pdata)
 	pdata->rss_hf = rss_conf->rss_hf;
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 }
 
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af11f..d4ba06c43a61 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -326,7 +326,7 @@ axgbe_dev_configure(struct rte_eth_dev *dev)
 	struct axgbe_port *pdata =  dev->data->dev_private;
 	/* Checksum offload to hardware */
 	pdata->rx_csum_enable = dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_CHECKSUM;
+				RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return 0;
 }
 
@@ -335,9 +335,9 @@ axgbe_dev_rx_mq_config(struct rte_eth_dev *dev)
 {
 	struct axgbe_port *pdata = dev->data->dev_private;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		pdata->rss_enable = 1;
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		pdata->rss_enable = 0;
 	else
 		return  -1;
@@ -383,7 +383,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
 
 	rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
 	rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 				max_pkt_len > pdata->rx_buf_size)
 		dev_data->scattered_rx = 1;
 
@@ -588,13 +588,13 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 
 	pdata->rss_hf = rss_conf->rss_hf & AXGBE_RSS_OFFLOAD;
 
-	if (pdata->rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (pdata->rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 
 	/* Set the RSS options */
@@ -763,7 +763,7 @@ axgbe_dev_link_update(struct rte_eth_dev *dev,
 	link.link_status = pdata->phy_link;
 	link.link_speed = pdata->phy_speed;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
 		PMD_DRV_LOG(ERR, "No change in link status\n");
@@ -1206,25 +1206,25 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_pktlen = AXGBE_RX_MAX_BUF_SIZE;
 	dev_info->max_mac_addrs = pdata->hw_feat.addn_mac + 1;
 	dev_info->max_hash_mac_addrs = pdata->hw_feat.hash_table_size;
-	dev_info->speed_capa =  ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM  |
-		DEV_RX_OFFLOAD_JUMBO_FRAME	|
-		DEV_RX_OFFLOAD_SCATTER	  |
-		DEV_RX_OFFLOAD_KEEP_CRC;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME	|
+		RTE_ETH_RX_OFFLOAD_SCATTER	  |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (pdata->hw_feat.rss) {
 		dev_info->flow_type_rss_offloads = AXGBE_RSS_OFFLOAD;
@@ -1261,13 +1261,13 @@ axgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	fc.autoneg = pdata->pause_autoneg;
 
 	if (pdata->rx_pause && pdata->tx_pause)
-		fc.mode = RTE_FC_FULL;
+		fc.mode = RTE_ETH_FC_FULL;
 	else if (pdata->rx_pause)
-		fc.mode = RTE_FC_RX_PAUSE;
+		fc.mode = RTE_ETH_FC_RX_PAUSE;
 	else if (pdata->tx_pause)
-		fc.mode = RTE_FC_TX_PAUSE;
+		fc.mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc.mode = RTE_FC_NONE;
+		fc.mode = RTE_ETH_FC_NONE;
 
 	fc_conf->high_water =  (1024 + (fc.low_water[0] << 9)) / 1024;
 	fc_conf->low_water =  (1024 + (fc.high_water[0] << 9)) / 1024;
@@ -1297,13 +1297,13 @@ axgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	AXGMAC_IOWRITE(pdata, reg, reg_val);
 	fc.mode = fc_conf->mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 	} else {
@@ -1385,15 +1385,15 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	fc.mode = pfc_conf->fc.mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 0);
@@ -1492,11 +1492,11 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	}
 	if (frame_size > AXGBE_ETH_MAX_LEN) {
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		val = 1;
 	} else {
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		val = 0;
 	}
 	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
@@ -1842,8 +1842,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	PMD_DRV_LOG(DEBUG, "EDVLP: qinq = 0x%x\n", qinq);
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_INNER\n");
+	case RTE_ETH_VLAN_TYPE_INNER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_INNER\n");
 		if (qinq) {
 			if (tpid != 0x8100 && tpid != 0x88a8)
 				PMD_DRV_LOG(ERR,
@@ -1860,8 +1860,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    "Inner type not supported in single tag\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_OUTER\n");
+	case RTE_ETH_VLAN_TYPE_OUTER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_OUTER\n");
 		if (qinq) {
 			PMD_DRV_LOG(DEBUG, "double tagging is enabled\n");
 			/*Enable outer VLAN tag*/
@@ -1878,11 +1878,11 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 					    "tag supported 0x8100/0x88A8\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_MAX:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_MAX\n");
+	case RTE_ETH_VLAN_TYPE_MAX:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_MAX\n");
 		break;
-	case ETH_VLAN_TYPE_UNKNOWN:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_UNKNOWN\n");
+	case RTE_ETH_VLAN_TYPE_UNKNOWN:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_UNKNOWN\n");
 		break;
 	}
 	return 0;
@@ -1916,8 +1916,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			PMD_DRV_LOG(DEBUG, "Strip ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_stripping(pdata);
@@ -1927,8 +1927,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_stripping(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			PMD_DRV_LOG(DEBUG, "Filter ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_filtering(pdata);
@@ -1938,14 +1938,14 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_filtering(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			PMD_DRV_LOG(DEBUG, "enabling vlan extended mode\n");
 			axgbe_vlan_extend_enable(pdata);
 			/* Set global registers with default ethertype*/
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					    RTE_ETHER_TYPE_VLAN);
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					    RTE_ETHER_TYPE_VLAN);
 		} else {
 			PMD_DRV_LOG(DEBUG, "disabling vlan extended mode\n");
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index a6226729fe4d..0a3e1c59df1a 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -97,12 +97,12 @@
 
 /* Receive Side Scaling */
 #define AXGBE_RSS_OFFLOAD  ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define AXGBE_RSS_HASH_KEY_SIZE		40
 #define AXGBE_RSS_MAX_TABLE_SIZE	256
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae74..59fa9175aded 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -597,7 +597,7 @@ static void axgbe_an73_state_machine(struct axgbe_port *pdata)
 		pdata->an_int = 0;
 		axgbe_an73_clear_interrupts(pdata);
 		pdata->eth_dev->data->dev_link.link_status =
-			ETH_LINK_DOWN;
+			RTE_ETH_LINK_DOWN;
 	} else if (pdata->an_state == AXGBE_AN_ERROR) {
 		PMD_DRV_LOG(ERR, "error during auto-negotiation, state=%u\n",
 			    cur_state);
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index 33f709a6bb02..baa17a5fb43f 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -75,7 +75,7 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		(DMA_CH_INC * rxq->queue_id));
 	rxq->dma_tail_reg = (volatile uint32_t *)((uint8_t *)rxq->dma_regs +
 						  DMA_CH_RDTR_LO);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -286,7 +286,7 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
@@ -430,7 +430,7 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a58..14d91f868cd8 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -94,14 +94,14 @@ bnx2x_link_update(struct rte_eth_dev *dev)
 	link.link_speed = sc->link_vars.line_speed;
 	switch (sc->link_vars.duplex) {
 		case DUPLEX_FULL:
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			break;
 		case DUPLEX_HALF:
-			link.link_duplex = ETH_LINK_HALF_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 			break;
 	}
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+		 RTE_ETH_LINK_SPEED_FIXED);
 	link.link_status = sc->link_vars.link_up;
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -181,7 +181,7 @@ bnx2x_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE(sc);
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
 		dev->data->mtu = sc->mtu;
 	}
@@ -412,7 +412,7 @@ bnx2xvf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_comple
 	if (sc->old_bulletin.valid_bitmap & (1 << CHANNEL_DOWN)) {
 		PMD_DRV_LOG(ERR, sc, "PF indicated channel is down."
 				"VF device is no longer operational");
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	}
 
 	return ret;
@@ -538,8 +538,8 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = BNX2X_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = BNX2X_MAX_RX_PKT_LEN;
 	dev_info->max_mac_addrs  = BNX2X_MAX_MAC_ADDRS;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
 	dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
@@ -675,7 +675,7 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
 	bnx2x_load_firmware(sc);
 	assert(sc->firmware);
 
-	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		sc->udp_rss = 1;
 
 	sc->rx_budget = BNX2X_RX_BUDGET;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 494a1eff3700..7e313c2fb5af 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -569,40 +569,40 @@ struct bnxt_rep_info {
 #define BNXT_FW_STATUS_SHUTDOWN		0x100000
 
 #define BNXT_ETH_RSS_SUPPORT (	\
-	ETH_RSS_IPV4 |		\
-	ETH_RSS_NONFRAG_IPV4_TCP |	\
-	ETH_RSS_NONFRAG_IPV4_UDP |	\
-	ETH_RSS_IPV6 |		\
-	ETH_RSS_NONFRAG_IPV6_TCP |	\
-	ETH_RSS_NONFRAG_IPV6_UDP |	\
-	ETH_RSS_LEVEL_MASK)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
-				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_CKSUM | \
-				     DEV_TX_OFFLOAD_UDP_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_TSO | \
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_QINQ_INSERT | \
-				     DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
-				     DEV_RX_OFFLOAD_VLAN_STRIP | \
-				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_TCP_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
-				     DEV_RX_OFFLOAD_KEEP_CRC | \
-				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO | \
-				     DEV_RX_OFFLOAD_SCATTER | \
-				     DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RSS_IPV4 |		\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |	\
+	RTE_ETH_RSS_IPV6 |		\
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP |	\
+	RTE_ETH_RSS_LEVEL_MASK)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+				     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+				     RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+				     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+				     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+				     RTE_ETH_RX_OFFLOAD_SCATTER | \
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index de34a2f0bb2d..99d4953305e3 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -426,7 +426,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 		goto err_out;
 
 	/* Alloc RSS context only if RSS mode is enabled */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int j, nr_ctxs = bnxt_rss_ctxts(bp);
 
 		/* RSS table size in Thor is 512.
@@ -458,7 +458,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	 * setting is not available at this time, it will not be
 	 * configured correctly in the CFA.
 	 */
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -493,7 +493,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
 
 	rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
-				    (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
+				    (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
 				    true : false);
 	if (rc)
 		goto err_out;
@@ -738,11 +738,11 @@ static int bnxt_start_nic(struct bnxt *bp)
 
 	if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
 		bp->eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		bp->flags |= BNXT_FLAG_JUMBO;
 	} else {
 		bp->eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		bp->flags &= ~BNXT_FLAG_JUMBO;
 	}
 
@@ -908,35 +908,35 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp)
 		link_speed = bp->link_info->support_pam4_speeds;
 
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100MBHD)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_1GB)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_2_5GB)
-		speed_capa |= ETH_LINK_SPEED_2_5G;
+		speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_10GB)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_20GB)
-		speed_capa |= ETH_LINK_SPEED_20G;
+		speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_25GB)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_40GB)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_200G)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	if (bp->link_info->auto_mode ==
 	    HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -980,8 +980,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT |
 				    dev_info->tx_queue_offload_capa;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
@@ -1030,8 +1030,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	 */
 
 	/* VMDq resources */
-	vpool = 64; /* ETH_64_POOLS */
-	vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+	vpool = 64; /* RTE_ETH_64_POOLS */
+	vrxq = 128; /* RTE_ETH_VMDQ_DCB_NUM_QUEUES */
 	for (i = 0; i < 4; vpool >>= 1, i++) {
 		if (max_vnics > vpool) {
 			for (j = 0; j < 5; vrxq >>= 1, j++) {
@@ -1126,18 +1126,18 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	    (uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps)
 		goto resource_error;
 
-	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) &&
 	    bp->max_vnics < eth_dev->data->nb_rx_queues)
 		goto resource_error;
 
 	bp->rx_cp_nr_rings = bp->rx_nr_rings;
 	bp->tx_cp_nr_rings = bp->tx_nr_rings;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		eth_dev->data->mtu =
 			eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
 			RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
@@ -1168,7 +1168,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
 		PMD_DRV_LOG(INFO, "Port %d Link Up - speed %u Mbps - %s\n",
 			eth_dev->data->port_id,
 			(uint32_t)link->link_speed,
-			(link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			("full-duplex") : ("half-duplex\n"));
 	else
 		PMD_DRV_LOG(INFO, "Port %d Link Down\n",
@@ -1184,10 +1184,10 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
 	uint16_t buf_size;
 	int i;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return 1;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		return 1;
 
 	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1232,16 +1232,16 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 	 * a limited subset have been enabled.
 	 */
 	if (eth_dev->data->dev_conf.rxmode.offloads &
-		~(DEV_RX_OFFLOAD_VLAN_STRIP |
-		  DEV_RX_OFFLOAD_KEEP_CRC |
-		  DEV_RX_OFFLOAD_JUMBO_FRAME |
-		  DEV_RX_OFFLOAD_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_TCP_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_RSS_HASH |
-		  DEV_RX_OFFLOAD_VLAN_FILTER))
+		~(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		  RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		  RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		  RTE_ETH_RX_OFFLOAD_VLAN_FILTER))
 		goto use_scalar_rx;
 
 #if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
@@ -1293,7 +1293,7 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev)
 	 * or tx offloads.
 	 */
 	if (eth_dev->data->scattered_rx ||
-	    (offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) ||
+	    (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
 	    BNXT_TRUFLOW_EN(bp))
 		goto use_scalar_tx;
 
@@ -1594,10 +1594,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 
 	bnxt_link_update_op(eth_dev, 1);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		vlan_mask |= ETH_VLAN_FILTER_MASK;
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		vlan_mask |= ETH_VLAN_STRIP_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		vlan_mask |= RTE_ETH_VLAN_FILTER_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		vlan_mask |= RTE_ETH_VLAN_STRIP_MASK;
 	rc = bnxt_vlan_offload_set_op(eth_dev, vlan_mask);
 	if (rc)
 		goto error;
@@ -1819,8 +1819,8 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		/* Retrieve link info from hardware */
 		rc = bnxt_get_hwrm_link_config(bp, &new);
 		if (rc) {
-			new.link_speed = ETH_LINK_SPEED_100M;
-			new.link_duplex = ETH_LINK_FULL_DUPLEX;
+			new.link_speed = RTE_ETH_LINK_SPEED_100M;
+			new.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR,
 				"Failed to retrieve link rc = 0x%x!\n", rc);
 			goto out;
@@ -2014,7 +2014,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 	if (!vnic->rss_table)
 		return -EINVAL;
 
-	if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return -EINVAL;
 
 	if (reta_size != tbl_size) {
@@ -2120,7 +2120,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	 * If RSS enablement were different than dev_configure,
 	 * then return -EINVAL
 	 */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (!rss_conf->rss_hf)
 			PMD_DRV_LOG(ERR, "Hash type NONE\n");
 	} else {
@@ -2138,7 +2138,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_conf->rss_hf);
 	vnic->hash_mode =
 		bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf,
-					    ETH_RSS_LEVEL(rss_conf->rss_hf));
+					    RTE_ETH_RSS_LEVEL(rss_conf->rss_hf));
 
 	/*
 	 * If hashkey is not specified, use the previously configured
@@ -2183,30 +2183,30 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
 		hash_types = vnic->hash_type;
 		rss_conf->rss_hf = 0;
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_IPV4;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_IPV6;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 		}
@@ -2246,17 +2246,17 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
 		fc_conf->autoneg = 1;
 	switch (bp->link_info->pause) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case (HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX |
 			HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	}
 	return 0;
@@ -2279,11 +2279,11 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		bp->link_info->auto_pause = 0;
 		bp->link_info->force_pause = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_RX;
@@ -2294,7 +2294,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_RX;
 		}
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX;
@@ -2305,7 +2305,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_TX;
 		}
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX |
@@ -2336,7 +2336,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2351,7 +2351,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 			HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN;
 		bp->vxlan_port_cnt++;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2389,7 +2389,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (!bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2406,7 +2406,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 			HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN;
 		port = bp->vxlan_fw_dst_port_id;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (!bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2584,7 +2584,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 	int rc;
 
 	vnic = BNXT_GET_DEFAULT_VNIC(bp);
-	if (!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		/* Remove any VLAN filters programmed */
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
@@ -2604,7 +2604,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 		bnxt_add_vlan_filter(bp, 0);
 	}
 	PMD_DRV_LOG(DEBUG, "VLAN Filtering: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER));
 
 	return 0;
 }
@@ -2617,7 +2617,7 @@ static int bnxt_free_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 
 	/* Destroy vnic filters and vnic */
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
 	}
@@ -2656,7 +2656,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		rc = bnxt_add_vlan_filter(bp, 0);
 		if (rc)
 			return rc;
@@ -2674,7 +2674,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	PMD_DRV_LOG(DEBUG, "VLAN Strip Offload: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP));
 
 	return rc;
 }
@@ -2694,22 +2694,22 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
 	if (!dev->data->dev_started)
 		return 0;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering */
 		rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
 		rc = bnxt_config_vlan_hw_stripping(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(DEBUG, "Extend VLAN supported\n");
 		else
 			PMD_DRV_LOG(INFO, "Extend VLAN unsupported\n");
@@ -2724,10 +2724,10 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 {
 	struct bnxt *bp = dev->data->dev_private;
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
-	if (vlan_type != ETH_VLAN_TYPE_INNER &&
-	    vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	    vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -2739,7 +2739,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		return -EINVAL;
 	}
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		switch (tpid) {
 		case RTE_ETHER_TYPE_QINQ:
 			bp->outer_tpid_bd =
@@ -2767,7 +2767,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		}
 		bp->outer_tpid_bd |= tpid;
 		PMD_DRV_LOG(INFO, "outer_tpid_bd = %x\n", bp->outer_tpid_bd);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer vlan in QinQ\n");
 		return -EINVAL;
@@ -2807,7 +2807,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
 	bnxt_del_dflt_mac_filter(bp, vnic);
 
 	memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		/* This filter will allow only untagged packets */
 		rc = bnxt_add_vlan_filter(bp, 0);
 	} else {
@@ -3029,10 +3029,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
 	if (new_mtu > RTE_ETHER_MTU) {
 		bp->flags |= BNXT_FLAG_JUMBO;
 		bp->eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	} else {
 		bp->eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		bp->flags &= ~BNXT_FLAG_JUMBO;
 	}
 
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 59489b591a6f..98e1107f629c 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -974,7 +974,7 @@ static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -1157,7 +1157,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 
 		rxq = bp->rx_queues[act_q->index];
 
-		if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) && rxq &&
+		if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) && rxq &&
 		    vnic->fw_vnic_id != INVALID_HW_RING_ID)
 			goto use_vnic;
 
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index f29d57423585..0d9dda0c362c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -628,7 +628,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 	uint16_t j = dst_id - 1;
 
 	//TODO: Is there a better way to add VLANs to each VNIC in case of VMDQ
-	if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) &&
+	if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) &&
 	    conf->pool_map[j].pools & (1UL << j)) {
 		PMD_DRV_LOG(DEBUG,
 			"Add vlan %u to vmdq pool %u\n",
@@ -2955,12 +2955,12 @@ static uint16_t bnxt_parse_eth_link_duplex(uint32_t conf_link_speed)
 {
 	uint8_t hw_link_duplex = HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
-	if ((conf_link_speed & ETH_LINK_SPEED_FIXED) == ETH_LINK_SPEED_AUTONEG)
+	if ((conf_link_speed & RTE_ETH_LINK_SPEED_FIXED) == RTE_ETH_LINK_SPEED_AUTONEG)
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
 	switch (conf_link_speed) {
-	case ETH_LINK_SPEED_10M_HD:
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF;
 	}
@@ -2977,51 +2977,51 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 {
 	uint16_t eth_link_speed = 0;
 
-	if (conf_link_speed == ETH_LINK_SPEED_AUTONEG)
-		return ETH_LINK_SPEED_AUTONEG;
+	if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
+		return RTE_ETH_LINK_SPEED_AUTONEG;
 
-	switch (conf_link_speed & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_100M:
-	case ETH_LINK_SPEED_100M_HD:
+	switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB;
 		break;
-	case ETH_LINK_SPEED_2_5G:
+	case RTE_ETH_LINK_SPEED_2_5G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_10GB;
 		break;
-	case ETH_LINK_SPEED_20G:
+	case RTE_ETH_LINK_SPEED_20G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 		break;
@@ -3034,11 +3034,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 	return eth_link_speed;
 }
 
-#define BNXT_SUPPORTED_SPEEDS (ETH_LINK_SPEED_100M | ETH_LINK_SPEED_100M_HD | \
-		ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | \
-		ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G | ETH_LINK_SPEED_25G | \
-		ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | \
-		ETH_LINK_SPEED_100G | ETH_LINK_SPEED_200G)
+#define BNXT_SUPPORTED_SPEEDS (RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \
+		RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | \
+		RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \
+		RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \
+		RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G)
 
 static int bnxt_validate_link_speed(struct bnxt *bp)
 {
@@ -3047,13 +3047,13 @@ static int bnxt_validate_link_speed(struct bnxt *bp)
 	uint32_t link_speed_capa;
 	uint32_t one_speed;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG)
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
 		return 0;
 
 	link_speed_capa = bnxt_get_speed_capabilities(bp);
 
-	if (link_speed & ETH_LINK_SPEED_FIXED) {
-		one_speed = link_speed & ~ETH_LINK_SPEED_FIXED;
+	if (link_speed & RTE_ETH_LINK_SPEED_FIXED) {
+		one_speed = link_speed & ~RTE_ETH_LINK_SPEED_FIXED;
 
 		if (one_speed & (one_speed - 1)) {
 			PMD_DRV_LOG(ERR,
@@ -3083,71 +3083,71 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed)
 {
 	uint16_t ret = 0;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) {
 		if (bp->link_info->support_speeds)
 			return bp->link_info->support_speeds;
 		link_speed = BNXT_SUPPORTED_SPEEDS;
 	}
 
-	if (link_speed & ETH_LINK_SPEED_100M)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_100M_HD)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M_HD)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_1G)
+	if (link_speed & RTE_ETH_LINK_SPEED_1G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_1GB;
-	if (link_speed & ETH_LINK_SPEED_2_5G)
+	if (link_speed & RTE_ETH_LINK_SPEED_2_5G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_2_5GB;
-	if (link_speed & ETH_LINK_SPEED_10G)
+	if (link_speed & RTE_ETH_LINK_SPEED_10G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_10GB;
-	if (link_speed & ETH_LINK_SPEED_20G)
+	if (link_speed & RTE_ETH_LINK_SPEED_20G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_20GB;
-	if (link_speed & ETH_LINK_SPEED_25G)
+	if (link_speed & RTE_ETH_LINK_SPEED_25G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_25GB;
-	if (link_speed & ETH_LINK_SPEED_40G)
+	if (link_speed & RTE_ETH_LINK_SPEED_40G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_40GB;
-	if (link_speed & ETH_LINK_SPEED_50G)
+	if (link_speed & RTE_ETH_LINK_SPEED_50G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_50GB;
-	if (link_speed & ETH_LINK_SPEED_100G)
+	if (link_speed & RTE_ETH_LINK_SPEED_100G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100GB;
-	if (link_speed & ETH_LINK_SPEED_200G)
+	if (link_speed & RTE_ETH_LINK_SPEED_200G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 	return ret;
 }
 
 static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 {
-	uint32_t eth_link_speed = ETH_SPEED_NUM_NONE;
+	uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	switch (hw_link_speed) {
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB:
-		eth_link_speed = ETH_SPEED_NUM_100M;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB:
-		eth_link_speed = ETH_SPEED_NUM_1G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB:
-		eth_link_speed = ETH_SPEED_NUM_2_5G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB:
-		eth_link_speed = ETH_SPEED_NUM_10G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB:
-		eth_link_speed = ETH_SPEED_NUM_20G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_25GB:
-		eth_link_speed = ETH_SPEED_NUM_25G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB:
-		eth_link_speed = ETH_SPEED_NUM_40G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB:
-		eth_link_speed = ETH_SPEED_NUM_50G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB:
-		eth_link_speed = ETH_SPEED_NUM_100G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB:
-		eth_link_speed = ETH_SPEED_NUM_200G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB:
 	default:
@@ -3160,16 +3160,16 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 
 static uint16_t bnxt_parse_hw_link_duplex(uint16_t hw_link_duplex)
 {
-	uint16_t eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+	uint16_t eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (hw_link_duplex) {
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH:
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_FULL:
 		/* FALLTHROUGH */
-		eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF:
-		eth_link_duplex = ETH_LINK_HALF_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "HWRM link duplex %d not defined\n",
@@ -3198,12 +3198,12 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link)
 		link->link_speed =
 			bnxt_parse_hw_link_speed(link_info->link_speed);
 	else
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 	link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex);
 	link->link_status = link_info->link_up;
 	link->link_autoneg = link_info->auto_mode ==
 		HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE ?
-		ETH_LINK_FIXED : ETH_LINK_AUTONEG;
+		RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 exit:
 	return rc;
 }
@@ -3229,7 +3229,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
 
 	autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
 	if (BNXT_CHIP_P5(bp) &&
-	    dev_conf->link_speeds == ETH_LINK_SPEED_40G) {
+	    dev_conf->link_speeds == RTE_ETH_LINK_SPEED_40G) {
 		/* 40G is not supported as part of media auto detect.
 		 * The speed should be forced and autoneg disabled
 		 * to configure 40G speed.
@@ -3320,7 +3320,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 
 	HWRM_CHECK_RESULT();
 
-	bp->vlan = rte_le_to_cpu_16(resp->vlan) & ETH_VLAN_ID_MAX;
+	bp->vlan = rte_le_to_cpu_16(resp->vlan) & RTE_ETH_VLAN_ID_MAX;
 
 	svif_info = rte_le_to_cpu_16(resp->svif_info);
 	if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index bdbad53b7d7f..a9f5e13476b0 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -536,7 +536,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
 
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 957b175f1b89..632a611bf612 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -185,7 +185,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 	int tpa_info_start = ag_bitmap_start + ag_bitmap_len;
 	int tpa_info_len = 0;
 
-	if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (rx_ring_info && (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		int tpa_max = BNXT_TPA_MAX_AGGS(bp);
 
 		tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
@@ -278,7 +278,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 				    ag_bitmap_start, ag_bitmap_len);
 
 		/* TPA info */
-		if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 			rx_ring_info->tpa_info =
 				((struct bnxt_tpa_info *)((char *)mz->addr +
 							  tpa_info_start));
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index bbcb3b06e7df..0ac3a2b3b7d3 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -41,13 +41,13 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 	bp->nr_vnics = 0;
 
 	/* Multi-queue mode */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 		/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
 
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_RSS:
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* FALLTHROUGH */
 			/* ETH_8/64_POOLs */
 			pools = conf->nb_queue_pools;
@@ -55,14 +55,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 			max_pools = RTE_MIN(bp->max_vnics,
 					    RTE_MIN(bp->max_l2_ctx,
 					    RTE_MIN(bp->max_rsscos_ctx,
-						    ETH_64_POOLS)));
+						    RTE_ETH_64_POOLS)));
 			PMD_DRV_LOG(DEBUG,
 				    "pools = %u max_pools = %u\n",
 				    pools, max_pools);
 			if (pools > max_pools)
 				pools = max_pools;
 			break;
-		case ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_RSS:
 			pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
 			break;
 		default:
@@ -100,7 +100,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 				    ring_idx, rxq, i, vnic);
 		}
 		if (i == 0) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) {
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB) {
 				bp->eth_dev->data->promiscuous = 1;
 				vnic->flags |= BNXT_VNIC_INFO_PROMISC;
 			}
@@ -110,8 +110,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 		vnic->end_grp_id = end_grp_id;
 
 		if (i) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB ||
-			    !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS))
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB ||
+			    !(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS))
 				vnic->rss_dflt_cr = true;
 			goto skip_filter_allocation;
 		}
@@ -136,14 +136,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 
 	bp->rx_num_qs_per_vnic = nb_q_per_grp;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
 
 		if (bp->flags & BNXT_FLAG_UPDATE_HASH)
 			bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
 
 		for (i = 0; i < bp->nr_vnics; i++) {
-			uint32_t lvl = ETH_RSS_LEVEL(rss->rss_hf);
+			uint32_t lvl = RTE_ETH_RSS_LEVEL(rss->rss_hf);
 
 			vnic = &bp->vnic_info[i];
 			vnic->hash_type =
@@ -338,7 +338,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	PMD_DRV_LOG(DEBUG, "RX Buf size is %d\n", rxq->rx_buf_size);
 	rxq->queue_id = queue_idx;
 	rxq->port_id = eth_dev->data->port_id;
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -454,7 +454,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		vnic = rxq->vnic;
 
 		if (BNXT_HAS_RING_GRPS(bp)) {
@@ -525,7 +525,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq->rx_started = false;
 	PMD_DRV_LOG(DEBUG, "Rx queue stopped\n");
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (BNXT_HAS_RING_GRPS(bp))
 			vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
 
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 73fbdd17d126..0909bab89b76 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -566,8 +566,8 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 	dev_conf = &rxq->bp->eth_dev->data->dev_conf;
 	offloads = dev_conf->rxmode.offloads;
 
-	outer_cksum_enabled = !!(offloads & (DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM));
+	outer_cksum_enabled = !!(offloads & (RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM));
 
 	/* Initialize ol_flags table. */
 	pt = rxr->ol_flags_table;
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
index d08854ff61e2..e4905b4fd169 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
@@ -416,7 +416,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 9b9489a695a2..0627fd212d0a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -96,7 +96,7 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static inline void
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 13211060cf0e..f15e2d3b4ed4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -352,7 +352,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 6e563053260a..ffd560166cac 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -333,7 +333,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 47824334ae3e..401dd83f4e7d 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -350,7 +350,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static void bnxt_tx_cmp_fast(struct bnxt_tx_queue *txq, int nr_pkts)
@@ -476,7 +476,7 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 26253a7e17f2..c63cf4b943fa 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -239,17 +239,17 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 {
 	uint16_t hwrm_type = 0;
 
-	if (rte_type & ETH_RSS_IPV4)
+	if (rte_type & RTE_ETH_RSS_IPV4)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
-	if (rte_type & ETH_RSS_IPV6)
+	if (rte_type & RTE_ETH_RSS_IPV6)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 
 	return hwrm_type;
@@ -258,11 +258,11 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl)
 {
 	uint32_t mode = HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
-	bool l3 = (hash_f & (ETH_RSS_IPV4 | ETH_RSS_IPV6));
-	bool l4 = (hash_f & (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP));
+	bool l3 = (hash_f & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6));
+	bool l4 = (hash_f & (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP));
 	bool l3_only = l3 && !l4;
 	bool l3_and_l4 = l3 && l4;
 
@@ -307,16 +307,16 @@ uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode)
 	 * return default hash mode.
 	 */
 	if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS))
-		return ETH_RSS_LEVEL_PMD_DEFAULT;
+		return RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_2 ||
 	    mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_OUTERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 	else if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_2 ||
 		 mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_INNERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_INNERMOST;
 	else
-		rss_level |= ETH_RSS_LEVEL_PMD_DEFAULT;
+		rss_level |= RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	return rss_level;
 }
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index f71543810970..77ecbef04c3d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -421,18 +421,18 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
 	if (vf >= bp->pdev->max_vfs)
 		return -EINVAL;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG) {
 		PMD_DRV_LOG(ERR, "Currently cannot toggle this setting\n");
 		return -ENOTSUP;
 	}
 
 	/* Is this really the correct mapping?  VFd seems to think it is. */
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		flag |= BNXT_VNIC_INFO_PROMISC;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		flag |= BNXT_VNIC_INFO_BCAST;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
 
 	if (on)
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index fc179a2732ac..fcf878b9b858 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -167,7 +167,7 @@ struct bond_dev_private {
 	struct rte_eth_desc_lim tx_desc_lim;	/**< Tx descriptor limits */
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
 			RTE_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[52];				/**< 52-byte hash key buffer. */
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 128754f4595a..20adfcf0ea9c 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -770,25 +770,25 @@ link_speed_key(uint16_t speed) {
 	uint16_t key_speed;
 
 	switch (speed) {
-	case ETH_SPEED_NUM_NONE:
+	case RTE_ETH_SPEED_NUM_NONE:
 		key_speed = 0x00;
 		break;
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		key_speed = BOND_LINK_SPEED_KEY_10M;
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		key_speed = BOND_LINK_SPEED_KEY_100M;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		key_speed = BOND_LINK_SPEED_KEY_1000M;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		key_speed = BOND_LINK_SPEED_KEY_10G;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		key_speed = BOND_LINK_SPEED_KEY_20G;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		key_speed = BOND_LINK_SPEED_KEY_40G;
 		break;
 	default:
@@ -866,7 +866,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
 
 		if (ret >= 0 && link_info.link_status != 0) {
 			key = link_speed_key(link_info.link_speed) << 1;
-			if (link_info.link_duplex == ETH_LINK_FULL_DUPLEX)
+			if (link_info.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
 				key |= BOND_LINK_FULL_DUPLEX_KEY;
 		} else {
 			key = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index eb8d15d16034..f12060bcafb0 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -204,7 +204,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	if ((bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER) == 0)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER) == 0)
 		return 0;
 
 	internals = bonded_eth_dev->data->dev_private;
@@ -586,7 +586,7 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 			return -1;
 		}
 
-		 if (link_props.link_status == ETH_LINK_UP) {
+		 if (link_props.link_status == RTE_ETH_LINK_UP) {
 			if (internals->active_slave_count == 0 &&
 			    !internals->user_defined_primary_port)
 				bond_ethdev_primary_set(internals,
@@ -721,7 +721,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
 		internals->tx_offload_capa = 0;
 		internals->rx_queue_offload_capa = 0;
 		internals->tx_queue_offload_capa = 0;
-		internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+		internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 		internals->reta_size = 0;
 		internals->candidate_max_rx_pktlen = 0;
 		internals->max_rx_pktlen = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index a6755661c49c..2482bb1cbc02 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1373,8 +1373,8 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
 		 * In any other mode the link properties are set to default
 		 * values of AUTONEG/DUPLEX
 		 */
-		ethdev->data->dev_link.link_autoneg = ETH_LINK_AUTONEG;
-		ethdev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		ethdev->data->dev_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
+		ethdev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	}
 }
 
@@ -1704,7 +1704,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 		slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
 	/* If RSS is enabled for bonding, try to enable it for slaves  */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (internals->rss_key_len != 0) {
 			slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
 					internals->rss_key_len;
@@ -1721,23 +1721,23 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		slave_eth_dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_VLAN_FILTER;
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	else
 		slave_eth_dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_VLAN_FILTER;
+				~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
 			bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
 
 	if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_JUMBO_FRAME)
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		slave_eth_dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_JUMBO_FRAME;
+				RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		slave_eth_dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+				~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
 	nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
@@ -1838,7 +1838,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	/* If RSS is enabled for bonding, synchronize RETA */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int i;
 		struct bond_dev_private *internals;
 
@@ -1961,7 +1961,7 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 		return -1;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 1;
 
 	internals = eth_dev->data->dev_private;
@@ -2101,7 +2101,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 			tlb_last_obytets[internals->active_slaves[i]] = 0;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 
 	internals->link_status_polling_enabled = 0;
@@ -2423,15 +2423,15 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 
 	bond_ctx = ethdev->data->dev_private;
 
-	ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+	ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	if (ethdev->data->dev_started == 0 ||
 			bond_ctx->active_slave_count == 0) {
-		ethdev->data->dev_link.link_status = ETH_LINK_DOWN;
+		ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 
-	ethdev->data->dev_link.link_status = ETH_LINK_UP;
+	ethdev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	if (wait_to_complete)
 		link_update = rte_eth_link_get;
@@ -2456,7 +2456,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 					  &slave_link);
 			if (ret < 0) {
 				ethdev->data->dev_link.link_speed =
-					ETH_SPEED_NUM_NONE;
+					RTE_ETH_SPEED_NUM_NONE;
 				RTE_BOND_LOG(ERR,
 					"Slave (port %u) link get failed: %s",
 					bond_ctx->active_slaves[idx],
@@ -2498,7 +2498,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 		 * In theses mode the maximum theoretical link speed is the sum
 		 * of all the slaves
 		 */
-		ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		one_link_update_succeeded = false;
 
 		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
@@ -2872,7 +2872,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 			goto link_update;
 
 		/* check link state properties if bonded link is up*/
-		if (bonded_eth_dev->data->dev_link.link_status == ETH_LINK_UP) {
+		if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 			if (link_properties_valid(bonded_eth_dev, &link) != 0)
 				RTE_BOND_LOG(ERR, "Invalid link properties "
 					     "for slave %d in bonding mode %d",
@@ -2888,7 +2888,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 		if (internals->active_slave_count < 1) {
 			/* If first active slave, then change link status */
 			bonded_eth_dev->data->dev_link.link_status =
-								ETH_LINK_UP;
+								RTE_ETH_LINK_UP;
 			internals->current_primary_port = port_id;
 			lsc_flag = 1;
 
@@ -3279,7 +3279,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->max_rx_pktlen = 0;
 
 	/* Initially allow to choose any offload type */
-	internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+	internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 
 	memset(&internals->default_rxconf, 0,
 	       sizeof(internals->default_rxconf));
@@ -3508,7 +3508,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	 * set key to the the value specified in port RSS configuration.
 	 * Fall back to default RSS key if the key is not specified
 	 */
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		if (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key != NULL) {
 			internals->rss_key_len =
 				dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len;
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 7caec6cf14c8..9a09748673b2 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -15,22 +15,22 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
@@ -69,36 +69,36 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
 	return flags;
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index 69e767ac3dd6..e3b1bd8ad225 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -76,12 +76,12 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index 0e1276c60ba2..f63b8fabefd4 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -77,11 +77,11 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 115e678916bb..9ff2d3dc114a 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -15,22 +15,22 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
@@ -69,36 +69,36 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
 	return flags;
@@ -277,9 +277,9 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
 
 	/* Platform specific checks */
 	if ((roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	     (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	     (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		plt_err("Outer IP and SCTP checksum unsupported");
 		return -EINVAL;
 	}
@@ -530,17 +530,17 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	 * TSO not supported for earlier chip revisions
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0())
-		dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-					  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		dev->tx_offload_capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 
 	/* 50G and 100G to be supported for board version C0
 	 * and above of CN9K.
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) {
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_50G;
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_100G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_50G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_100G;
 	}
 
 	dev->hwcap = 0;
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 7d9f1bd61f79..08ee28658bce 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -76,12 +76,12 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index 763f9a14fd79..f35ae8e70438 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -76,11 +76,11 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 0e3652ed5109..f6b75645bb69 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -10,7 +10,7 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
 
 	if (roc_nix_is_vf_or_sdp(&dev->nix) ||
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -28,11 +28,11 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
 	uint32_t speed_capa;
 
 	/* Auto negotiation disabled */
-	speed_capa = ETH_LINK_SPEED_FIXED;
+	speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix)) {
-		speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			      ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			      ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			      RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			      RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return speed_capa;
@@ -54,8 +54,8 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 }
 
@@ -90,7 +90,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	struct rte_eth_fc_conf fc_conf = {0};
 	int rc;
 
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -98,10 +98,10 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
@@ -122,11 +122,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (roc_model_is_cn96_ax() &&
 	    dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
-	    (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_cfg.mode == RTE_ETH_FC_FULL || fc_cfg.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_cfg.mode =
-				(fc_cfg.mode == RTE_FC_FULL ||
-				fc_cfg.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_cfg.mode == RTE_ETH_FC_FULL ||
+				fc_cfg.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
@@ -169,7 +169,7 @@ nix_sq_max_sqe_sz(struct cnxk_eth_dev *dev)
 	 * Maximum three segments can be supported with W8, Choose
 	 * NIX_MAXSQESZ_W16 for multi segment offload.
 	 */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -361,7 +361,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
 		rc = cnxk_nix_tsc_convert(dev);
 		if (rc) {
 			plt_err("Failed to calculate delta and freq mult");
@@ -434,24 +434,24 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->ethdev_rss_hf = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -460,34 +460,34 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -513,7 +513,7 @@ nix_rss_default_setup(struct cnxk_eth_dev *dev)
 	uint64_t rss_hf;
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 
@@ -729,8 +729,8 @@ nix_lso_fmt_setup(struct cnxk_eth_dev *dev)
 
 	/* Nothing much to do if offload is not enabled */
 	if (!(dev->tx_offloads &
-	      (DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	       DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO)))
+	      (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	       RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))
 		return 0;
 
 	/* Setup LSO formats in AF. Its a no-op if other ethdev has
@@ -778,13 +778,13 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		plt_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
@@ -814,7 +814,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	/* Prepare rx cfg */
 	rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_OL4;
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_IL4;
 	}
@@ -1191,12 +1191,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled on PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct cnxk_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
 		cnxk_eth_dev_ops.timesync_enable(eth_dev);
 	else
 		cnxk_eth_dev_ops.timesync_disable(eth_dev);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 		rc = rte_mbuf_dyn_rx_timestamp_register
 			(&dev->tstamp.tstamp_dynfield_offset,
 			 &dev->tstamp.rx_tstamp_dynflag);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 2528b3cdaa0c..53a657f8865d 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -54,41 +54,44 @@
 	 CNXK_NIX_TX_NB_SEG_MAX)
 
 #define CNXK_NIX_RSS_L3_L4_SRC_DST                                             \
-	(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY |     \
-	 ETH_RSS_L4_DST_ONLY)
+	(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |                   \
+	 RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 #define CNXK_NIX_RSS_OFFLOAD                                                   \
-	(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP |               \
-	 ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD |                  \
-	 CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+	(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |                 \
+	 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_TUNNEL |             \
+	 RTE_ETH_RSS_L2_PAYLOAD | CNXK_NIX_RSS_L3_L4_SRC_DST |                 \
+	 RTE_ETH_RSS_LEVEL_MASK | RTE_ETH_RSS_C_VLAN)
 
 #define CNXK_NIX_TX_OFFLOAD_CAPA                                               \
-	(DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE |          \
-	 DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT |             \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |                 \
-	 DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO |                  \
-	 DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO |        \
-	 DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS |              \
-	 DEV_TX_OFFLOAD_IPV4_CKSUM)
+	(RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |          \
+	 RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT |             \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |                 \
+	 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO |                  \
+	 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |        \
+	 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS |              \
+	 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 
 #define CNXK_NIX_RX_OFFLOAD_CAPA                                               \
-	(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM |                 \
-	 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER |            \
-	 DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |         \
-	 DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP |                  \
-	 DEV_RX_OFFLOAD_VLAN_STRIP)
+	(RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |                 \
+	 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER |            \
+	 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |         \
+	 RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_TIMESTAMP |                  \
+	 RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 
 #define RSS_IPV4_ENABLE                                                        \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |         \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_SCTP)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE                                                        \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP |         \
-	 ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_SCTP)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE                                                     \
-	(ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | ETH_RSS_IPV6_UDP_EX)
+	(RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS 3
 
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 37720fb0954e..bf0c6d6b4ad8 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -49,11 +49,11 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
 		val = ROC_NIX_RSS_RETA_SZ_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
 		val = ROC_NIX_RSS_RETA_SZ_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
 		val = ROC_NIX_RSS_RETA_SZ_256;
 	else
 		val = ROC_NIX_RSS_RETA_SZ_64;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index b6cc5286c6d0..fa6b8aa4f0c5 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -81,25 +81,25 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-		{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
-		{DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
-		{DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
-		{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
-		{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
-		{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
-		{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
-		{DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
-		{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-		{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-		{DEV_RX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
-		{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
-		{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+		{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+		{RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+		{RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+		{RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
+		{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+		{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+		{RTE_ETH_RX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+		{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
 						 "Scalar, Rx Offloads:"
@@ -143,28 +143,28 @@ cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-		{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-		{DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
-		{DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
-		{DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
-		{DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
-		{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
-		{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
-		{DEV_TX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+		{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+		{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+		{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+		{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+		{RTE_ETH_TX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
 						 "Scalar, Tx Offloads:"
@@ -204,8 +204,8 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 	enum rte_eth_fc_mode mode_map[] = {
-					   RTE_FC_NONE, RTE_FC_RX_PAUSE,
-					   RTE_FC_TX_PAUSE, RTE_FC_FULL
+					   RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
+					   RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
 					  };
 	struct roc_nix *nix = &dev->nix;
 	int mode;
@@ -265,10 +265,10 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -409,13 +409,13 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		plt_err("Scatter offload is not enabled for mtu");
 		goto exit;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    frame_size > (buffsz * CNXK_NIX_RX_NB_SEG_MAX)) {
 		plt_err("Greater than maximum supported packet length");
 		goto exit;
@@ -443,9 +443,9 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	frame_size += RTE_ETHER_CRC_LEN;
 
 	if (frame_size > RTE_ETHER_MAX_LEN)
-		dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* Update max_rx_pkt_len */
 	data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -816,7 +816,7 @@ cnxk_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 	if (rss_conf->rss_key)
 		roc_nix_rss_key_set(nix, rss_conf->rss_key);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index 3fdbdba49549..1cff8d56e65b 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -38,7 +38,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		plt_info("Port %d: Link Up - speed %u Mbps - %s",
 			 (int)(eth_dev->data->port_id),
 			 (uint32_t)link->link_speed,
-			 link->link_duplex == ETH_LINK_FULL_DUPLEX
+			 link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 				 ? "full-duplex"
 				 : "half-duplex");
 	else
@@ -66,7 +66,7 @@ cnxk_eth_dev_link_status_cb(struct roc_nix *nix, struct roc_nix_link_info *link)
 
 	eth_link.link_status = link->status;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	/* Print link info */
@@ -94,17 +94,17 @@ cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		return 0;
 
 	if (roc_nix_is_lbk(&dev->nix)) {
-		link.link_status = ETH_LINK_UP;
-		link.link_speed = ETH_SPEED_NUM_100G;
-		link.link_autoneg = ETH_LINK_FIXED;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		rc = roc_nix_mac_link_info_get(&dev->nix, &info);
 		if (rc)
 			return rc;
 		link.link_status = info.status;
 		link.link_speed = info.speed;
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 		if (info.full_duplex)
 			link.link_duplex = info.full_duplex;
 	}
diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c
index 449489f599c4..139fea256ccd 100644
--- a/drivers/net/cnxk/cnxk_ptp.c
+++ b/drivers/net/cnxk/cnxk_ptp.c
@@ -227,7 +227,7 @@ cnxk_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	dev->rx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 	dev->tx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	rc = roc_nix_ptp_rx_ena_dis(nix, true);
 	if (!rc) {
@@ -257,7 +257,7 @@ int
 cnxk_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
-	uint64_t rx_offloads = DEV_RX_OFFLOAD_TIMESTAMP;
+	uint64_t rx_offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	struct roc_nix *nix = &dev->nix;
 	int rc = 0;
 
diff --git a/drivers/net/cnxk/cnxk_rte_flow.c b/drivers/net/cnxk/cnxk_rte_flow.c
index 32c1b5dee5fa..ecdfee7b11a6 100644
--- a/drivers/net/cnxk/cnxk_rte_flow.c
+++ b/drivers/net/cnxk/cnxk_rte_flow.c
@@ -69,7 +69,7 @@ npc_rss_action_validate(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("multi-queue mode is disabled");
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 7c89a028bf16..dee618a0db5f 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -28,32 +28,32 @@
 #define CXGBE_LINK_STATUS_POLL_CNT 100 /* Max number of times to poll */
 
 #define CXGBE_DEFAULT_RSS_KEY_LEN     40 /* 320-bits */
-#define CXGBE_RSS_HF_IPV4_MASK (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-				ETH_RSS_NONFRAG_IPV4_OTHER)
-#define CXGBE_RSS_HF_IPV6_MASK (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
-				ETH_RSS_NONFRAG_IPV6_OTHER | \
-				ETH_RSS_IPV6_EX)
-#define CXGBE_RSS_HF_TCP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_TCP | \
-				    ETH_RSS_IPV6_TCP_EX)
-#define CXGBE_RSS_HF_UDP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_UDP | \
-				    ETH_RSS_IPV6_UDP_EX)
-#define CXGBE_RSS_HF_ALL (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP)
+#define CXGBE_RSS_HF_IPV4_MASK (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+#define CXGBE_RSS_HF_IPV6_MASK (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+				RTE_ETH_RSS_IPV6_EX)
+#define CXGBE_RSS_HF_TCP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+				    RTE_ETH_RSS_IPV6_TCP_EX)
+#define CXGBE_RSS_HF_UDP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+				    RTE_ETH_RSS_IPV6_UDP_EX)
+#define CXGBE_RSS_HF_ALL (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP)
 
 /* Tx/Rx Offloads supported */
-#define CXGBE_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT | \
-			   DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_TX_OFFLOAD_UDP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_TSO | \
-			   DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define CXGBE_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			   DEV_RX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_RX_OFFLOAD_UDP_CKSUM | \
-			   DEV_RX_OFFLOAD_TCP_CKSUM | \
-			   DEV_RX_OFFLOAD_JUMBO_FRAME | \
-			   DEV_RX_OFFLOAD_SCATTER | \
-			   DEV_RX_OFFLOAD_RSS_HASH)
+#define CXGBE_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+			   RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+			   RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define CXGBE_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+			   RTE_ETH_RX_OFFLOAD_SCATTER | \
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 /* Devargs filtermode and filtermask representation */
 enum cxgbe_devargs_filter_mode_flags {
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 177eca397600..4b5ab6f62971 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -231,9 +231,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
 	}
 
 	new_link.link_status = cxgbe_force_linkup(adapter) ?
-			       ETH_LINK_UP : pi->link_cfg.link_ok;
+			       RTE_ETH_LINK_UP : pi->link_cfg.link_ok;
 	new_link.link_autoneg = (lc->link_caps & FW_PORT_CAP32_ANEG) ? 1 : 0;
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	new_link.link_speed = t4_fwcap_to_speed(lc->link_caps);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
@@ -316,10 +316,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	/* set to jumbo mode if needed */
 	if (new_mtu > CXGBE_ETH_MAX_LEN)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
 			    -1, -1, true);
@@ -396,7 +396,7 @@ int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
 			goto out;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 	else
 		eth_dev->data->scattered_rx = 0;
@@ -460,9 +460,9 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
 
 	CXGBE_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!(adapter->flags & FW_QUEUE_BOUND)) {
 		err = cxgbe_setup_sge_fwevtq(adapter);
@@ -685,10 +685,10 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 	/* Set to jumbo mode if necessary */
 	if (pkt_len > CXGBE_ETH_MAX_LEN)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
 			       &rxq->fl, NULL,
@@ -1079,13 +1079,13 @@ static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		rx_pause = 1;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1098,12 +1098,12 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	u8 tx_pause = 0, rx_pause = 0;
 	int ret;
 
-	if (fc_conf->mode == RTE_FC_FULL) {
+	if (fc_conf->mode == RTE_ETH_FC_FULL) {
 		tx_pause = 1;
 		rx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE) {
 		tx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_RX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE) {
 		rx_pause = 1;
 	}
 
@@ -1199,9 +1199,9 @@ static int cxgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 		rss_hf |= CXGBE_RSS_HF_IPV6_MASK;
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) {
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (flags & F_FW_RSS_VI_CONFIG_CMD_UDPEN)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	}
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
@@ -1478,7 +1478,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_100G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_100G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_100G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
 		}
@@ -1487,7 +1487,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_50G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_50G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_50G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
 		}
@@ -1496,7 +1496,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_25G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_25G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_25G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6dd1bf1f836e..54723edc2144 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1671,7 +1671,7 @@ int cxgbe_link_start(struct port_info *pi)
 	 * that step explicitly.
 	 */
 	ret = t4_set_rxmode(adapter, adapter->mbox, pi->viid, mtu, -1, -1, -1,
-			    !!(conf_offloads & DEV_RX_OFFLOAD_VLAN_STRIP),
+			    !!(conf_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP),
 			    true);
 	if (ret == 0) {
 		ret = cxgbe_mpstcam_modify(pi, (int)pi->xact_addr_filt,
@@ -1695,7 +1695,7 @@ int cxgbe_link_start(struct port_info *pi)
 	}
 
 	if (ret == 0 && cxgbe_force_linkup(adapter))
-		pi->eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		pi->eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return ret;
 }
 
@@ -1726,10 +1726,10 @@ int cxgbe_write_rss_conf(const struct port_info *pi, uint64_t rss_hf)
 	if (rss_hf & CXGBE_RSS_HF_IPV4_MASK)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
 			 F_FW_RSS_VI_CONFIG_CMD_UDPEN;
 
@@ -1866,7 +1866,7 @@ static void fw_caps_to_speed_caps(enum fw_port_type port_type,
 {
 #define SET_SPEED(__speed_name) \
 	do { \
-		*speed_caps |= ETH_LINK_ ## __speed_name; \
+		*speed_caps |= RTE_ETH_LINK_ ## __speed_name; \
 	} while (0)
 
 #define FW_CAPS_TO_SPEED(__fw_name) \
@@ -1953,7 +1953,7 @@ void cxgbe_get_speed_caps(struct port_info *pi, u32 *speed_caps)
 			      speed_caps);
 
 	if (!(pi->link_cfg.pcaps & FW_PORT_CAP32_ANEG))
-		*speed_caps |= ETH_LINK_SPEED_FIXED;
+		*speed_caps |= RTE_ETH_LINK_SPEED_FIXED;
 }
 
 /**
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4b3..eddb818c4861 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -366,7 +366,7 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
 	int ret, i;
 	struct rte_pktmbuf_pool_private *mbp_priv;
 	u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_JUMBO_FRAME;
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
 	mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 27d670f843d2..c466256137a3 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,30 +54,30 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -189,10 +189,10 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > DPAA_ETH_MAX_LEN)
 		dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
@@ -238,7 +238,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 		tx_offloads, dev_tx_offloads_nodis);
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		uint32_t max_len;
 
 		DPAA_PMD_DEBUG("enabling jumbo");
@@ -259,7 +259,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 			- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		DPAA_PMD_DEBUG("enabling scatter mode");
 		fman_if_set_sg(dev->process_private, 1);
 		dev->data->scattered_rx = 1;
@@ -304,43 +304,43 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 
 	/* Configure link only if link is UP*/
 	if (link->link_status) {
-		if (eth_conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
+		if (eth_conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 			/* Start autoneg only if link is not in autoneg mode */
 			if (!link->link_autoneg)
 				dpaa_restart_link_autoneg(__fif->node_name);
-		} else if (eth_conf->link_speeds & ETH_LINK_SPEED_FIXED) {
-			switch (eth_conf->link_speeds & ~ETH_LINK_SPEED_FIXED) {
-			case ETH_LINK_SPEED_10M_HD:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+		} else if (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+			switch (eth_conf->link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+			case RTE_ETH_LINK_SPEED_10M_HD:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10M:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10M:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M_HD:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M_HD:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_1G:
-				speed = ETH_SPEED_NUM_1G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_1G:
+				speed = RTE_ETH_SPEED_NUM_1G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_2_5G:
-				speed = ETH_SPEED_NUM_2_5G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_2_5G:
+				speed = RTE_ETH_SPEED_NUM_2_5G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10G:
-				speed = ETH_SPEED_NUM_10G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10G:
+				speed = RTE_ETH_SPEED_NUM_10G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			default:
-				speed = ETH_SPEED_NUM_NONE;
-				duplex = ETH_LINK_FULL_DUPLEX;
+				speed = RTE_ETH_SPEED_NUM_NONE;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			}
 			/* Set link speed */
@@ -556,30 +556,30 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 
 	if (fif->mac_type == fman_mac_1g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G;
 	} else if (fif->mac_type == fman_mac_2_5g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G;
 	} else if (fif->mac_type == fman_mac_10g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G
-					| ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G
+					| RTE_ETH_LINK_SPEED_10G;
 	} else {
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
 			     dpaa_intf->name, fif->mac_type);
@@ -612,13 +612,13 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-			{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+			{RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+			{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 
 	/* Update Rx offload info */
@@ -645,14 +645,14 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -686,7 +686,7 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 			ret = dpaa_get_link_status(__fif->node_name, link);
 			if (ret)
 				return ret;
-			if (link->link_status == ETH_LINK_DOWN &&
+			if (link->link_status == RTE_ETH_LINK_DOWN &&
 			    wait_to_complete)
 				rte_delay_ms(CHECK_INTERVAL);
 			else
@@ -697,15 +697,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	}
 
 	if (ioctl_version < 2) {
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
-		link->link_autoneg = ETH_LINK_AUTONEG;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 		if (fif->mac_type == fman_mac_1g)
-			link->link_speed = ETH_SPEED_NUM_1G;
+			link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		else if (fif->mac_type == fman_mac_2_5g)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else if (fif->mac_type == fman_mac_10g)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
 			DPAA_PMD_ERR("invalid link_speed: %s, %d",
 				     dpaa_intf->name, fif->mac_type);
@@ -981,7 +981,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
 		;
 	} else if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SCATTER) {
+			RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
 			buffsz * DPAA_SGT_MAX_ENTRIES) {
 			DPAA_PMD_ERR("max RxPkt size %d too big to fit "
@@ -1303,7 +1303,7 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
 	else
 		return dpaa_eth_dev_stop(dev);
 	return 0;
@@ -1319,7 +1319,7 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
 	else
 		dpaa_eth_dev_start(dev);
 	return 0;
@@ -1349,10 +1349,10 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	if (fc_conf->mode == RTE_FC_NONE) {
+	if (fc_conf->mode == RTE_ETH_FC_NONE) {
 		return 0;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
-		 fc_conf->mode == RTE_FC_FULL) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_ETH_FC_FULL) {
 		fman_if_set_fc_threshold(dev->process_private,
 					 fc_conf->high_water,
 					 fc_conf->low_water,
@@ -1396,11 +1396,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
 	}
 	ret = fman_if_get_fc_threshold(dev->process_private);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time =
 			fman_if_get_fc_quanta(dev->process_private);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -1663,10 +1663,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
 	fc_conf = dpaa_intf->fc_conf;
 	ret = fman_if_get_fc_threshold(fman_intf);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c29f..c868e9d5bd9b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -74,11 +74,11 @@
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
 #define DPAA_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP)
 
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c5b5ec869519..1ccd03602790 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -394,7 +394,7 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1U << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_L2_PAYLOAD:
 
 				if (l2_configured)
 					break;
@@ -404,9 +404,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_ETH;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
 
 				if (ipv4_configured)
 					break;
@@ -415,10 +415,10 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV4;
 				break;
 
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (ipv6_configured)
 					break;
@@ -427,9 +427,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV6;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
 
 				if (tcp_configured)
 					break;
@@ -438,9 +438,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_TCP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (udp_configured)
 					break;
@@ -449,8 +449,8 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_UDP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 641e7027f12e..7c92b2a42e3f 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -216,7 +216,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1ULL << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_L2_PAYLOAD:
 
 				if (l2_configured)
 					break;
@@ -233,7 +233,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 			break;
 
-			case ETH_RSS_MPLS:
+			case RTE_ETH_RSS_MPLS:
 
 				if (mpls_configured)
 					break;
@@ -270,13 +270,13 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (l3_configured)
 					break;
@@ -314,12 +314,12 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 			break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_TCP_EX:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (l4_configured)
 					break;
@@ -346,8 +346,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e22..23bb985b95e9 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -38,34 +38,34 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_CHECKSUM |
-		DEV_RX_OFFLOAD_SCTP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_RSS_HASH |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* enable timestamp in mbuf */
 bool dpaa2_enable_ts[RTE_MAX_ETHPORTS];
@@ -143,7 +143,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN Filter not avaialble */
 		if (!priv->max_vlan_filters) {
 			DPAA2_PMD_INFO("VLAN filter not available");
@@ -151,7 +151,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 
 		if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = dpni_enable_vlan_filter(dpni, CMD_PRI_LOW,
 						      priv->token, true);
 		else
@@ -252,13 +252,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 					dev_rx_offloads_nodis;
 	dev_info->tx_offload_capa = dev_tx_offloads_sup |
 					dev_tx_offloads_nodis;
-	dev_info->speed_capa = ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
@@ -271,10 +271,10 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
 
 	if (dpaa2_svr_family == SVR_LX2160A) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return 0;
@@ -292,16 +292,16 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_CHECKSUM, " Checksum,"},
-			{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
-			{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
-			{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
-			{DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
-			{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
+			{RTE_ETH_RX_OFFLOAD_CHECKSUM, " Checksum,"},
+			{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
+			{RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
+			{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"},
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}
 	};
 
 	/* Update Rx offload info */
@@ -328,15 +328,15 @@ dpaa2_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -559,7 +559,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		tx_offloads, dev_tx_offloads_nodis);
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
 			ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
 				priv->token, eth_conf->rxmode.max_rx_pkt_len
@@ -578,7 +578,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
 			ret = dpaa2_setup_flow_dist(dev,
 					eth_conf->rx_adv_conf.rss_conf.rss_hf,
@@ -592,12 +592,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rx_l3_csum_offload = true;
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_UDP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_SCTP_CKSUM))
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM))
 		rx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -615,7 +615,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	}
 
 #if !defined(RTE_LIBRTE_IEEE1588)
-	if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 #endif
 	{
 		ret = rte_mbuf_dyn_rx_timestamp_register(
@@ -628,12 +628,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		dpaa2_enable_ts[dev->data->port_id] = true;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM))
 		tx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -665,8 +665,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		dpaa2_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 
 	dpaa2_tm_init(dev);
 
@@ -1477,10 +1477,10 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > DPAA2_ETH_MAX_LEN)
 		dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
@@ -1881,7 +1881,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
 			return -1;
 		}
-		if (state.up == ETH_LINK_DOWN &&
+		if (state.up == RTE_ETH_LINK_DOWN &&
 		    wait_to_complete)
 			rte_delay_ms(CHECK_INTERVAL);
 		else
@@ -1893,9 +1893,9 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 	link.link_speed = state.rate;
 
 	if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
@@ -2056,9 +2056,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	No TX side flow control (send Pause frame disabled)
 		 */
 		if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_RX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	} else {
 		/* DPNI_LINK_OPT_PAUSE not set
 		 *  if ASYM_PAUSE set,
@@ -2068,9 +2068,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	Flow control disabled
 		 */
 		if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return ret;
@@ -2114,14 +2114,14 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	/* update cfg with fc_conf */
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		/* Full flow control;
 		 * OPT_PAUSE set, ASYM_PAUSE not set
 		 */
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		/* Enable RX flow control
 		 * OPT_PAUSE not set;
 		 * ASYM_PAUSE set;
@@ -2129,7 +2129,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_PAUSE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		/* Enable TX Flow control
 		 * OPT_PAUSE set
 		 * ASYM_PAUSE set
@@ -2137,7 +2137,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		/* Disable Flow control
 		 * OPT_PAUSE not set
 		 * ASYM_PAUSE not set
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index b9c729f6cdc0..ca75a2175524 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -65,12 +65,12 @@
 #define DPAA2_TX_CONF_ENABLE	0x08
 
 #define DPAA2_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP | \
-	ETH_RSS_MPLS)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP | \
+	RTE_ETH_RSS_MPLS)
 
 /* LX2 FRC Parsed values (Little Endian) */
 #define DPAA2_PKT_TYPE_ETHER		0x0060
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3f9..7c77243b5d1a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -773,7 +773,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP)
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			rte_vlan_strip(bufs[num_rx]);
 
 		dq_storage++;
@@ -987,7 +987,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 							eth_data->port_id);
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP) {
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			rte_vlan_strip(bufs[num_rx]);
 		}
 
@@ -1230,7 +1230,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 					if (unlikely(((*bufs)->ol_flags
 						& PKT_TX_VLAN_PKT) ||
 						(eth_data->dev_conf.txmode.offloads
-						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+						& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
 						if (ret)
 							goto send_n_return;
@@ -1273,7 +1273,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
 				(eth_data->dev_conf.txmode.offloads
-				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+				& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
 				if (ret)
 					goto send_n_return;
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6f4..ca488fea966f 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -81,15 +81,15 @@
 #define E1000_FTQF_QUEUE_ENABLE          0x00000100
 
 #define IGB_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 /*
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b0275..81f8bc3cd746 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -599,8 +599,8 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	e1000_clear_hw_cntrs_base_generic(hw);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK | \
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_em_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to update vlan offload");
@@ -613,39 +613,39 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -1104,9 +1104,9 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_mtu_seg_max = EM_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	/* Preferred queue parameters */
 	dev_info->default_rxportconf.nb_queues = 1;
@@ -1164,17 +1164,17 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -1426,15 +1426,15 @@ eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if(mask & RTE_ETH_VLAN_STRIP_MASK){
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			em_vlan_hw_strip_enable(dev);
 		else
 			em_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if(mask & RTE_ETH_VLAN_FILTER_MASK){
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			em_vlan_hw_filter_enable(dev);
 		else
 			em_vlan_hw_filter_disable(dev);
@@ -1603,7 +1603,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev,
 	if (link.link_status) {
 		PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s",
 			     dev->data->port_id, link.link_speed,
-			     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			     "full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
@@ -1685,13 +1685,13 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -1820,11 +1820,11 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* switch to jumbo mode if needed */
 	if (frame_size > E1000_ETH_MAX_LEN) {
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl |= E1000_RCTL_LPE;
 	} else {
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl &= ~E1000_RCTL_LPE;
 	}
 	E1000_WRITE_REG(hw, E1000_RCTL, rctl);
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd0074..cf672c32277b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -93,7 +93,7 @@ struct em_rx_queue {
 	struct em_rx_entry *sw_ring;   /**< address of RX software ring. */
 	struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
 	struct rte_mbuf *pkt_last_seg;  /**< Last segment of current packet. */
-	uint64_t	    offloads;   /**< Offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< Offloads of RTE_ETH_RX_OFFLOAD_* */
 	uint16_t            nb_rx_desc; /**< number of RX descriptors. */
 	uint16_t            rx_tail;    /**< current value of RDT register. */
 	uint16_t            nb_rx_hold; /**< number of held free RX desc. */
@@ -172,7 +172,7 @@ struct em_tx_queue {
 	uint8_t                wthresh;  /**< Write-back threshold register. */
 	struct em_ctx_info ctx_cache;
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 };
 
 #if 1
@@ -1168,11 +1168,11 @@ em_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	RTE_SET_USED(dev);
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	return tx_offload_capa;
 }
@@ -1367,15 +1367,15 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 	max_rx_pktlen = em_get_max_pktlen(dev);
 
 	rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP  |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		DEV_RX_OFFLOAD_UDP_CKSUM   |
-		DEV_RX_OFFLOAD_TCP_CKSUM   |
-		DEV_RX_OFFLOAD_KEEP_CRC    |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 	if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
-		rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	return rx_offload_capa;
 }
@@ -1468,7 +1468,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 	rxq->queue_id = queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1806,7 +1806,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -1839,7 +1839,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		 * to avoid splitting packets that don't fit into
 		 * one buffer.
 		 */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME ||
 				rctl_bsize < RTE_ETHER_MAX_LEN) {
 			if (!dev->data->scattered_rx)
 				PMD_INIT_LOG(DEBUG, "forcing scatter mode");
@@ -1849,7 +1849,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_em_recv_scattered_pkts;
@@ -1862,7 +1862,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
@@ -1874,21 +1874,21 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	if ((hw->mac.type == e1000_ich9lan ||
 			hw->mac.type == e1000_pch2lan ||
 			hw->mac.type == e1000_ich10lan) &&
-			rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+			rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
 		E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
 		E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
 	}
 
 	if (hw->mac.type == e1000_pch2lan) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 			e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
 		else
 			e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
 	}
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= E1000_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1908,7 +1908,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	/*
 	 * Configure support of jumbo frames, if any.
 	 */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		rctl |= E1000_RCTL_LPE;
 	else
 		rctl &= ~E1000_RCTL_LPE;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 10ee0f33415a..7a35d7d89eb1 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1082,21 +1082,21 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 	uint16_t nb_rx_q = dev->data->nb_rx_queues;
 	uint16_t nb_tx_q = dev->data->nb_tx_queues;
 
-	if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
-	    tx_mq_mode == ETH_MQ_TX_DCB ||
-	    tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_DCB ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* Check multi-queue mode.
-		 * To no break software we accept ETH_MQ_RX_NONE as this might
+		 * To no break software we accept RTE_ETH_MQ_RX_NONE as this might
 		 * be used to turn off VLAN filter.
 		 */
 
-		if (rx_mq_mode == ETH_MQ_RX_NONE ||
-		    rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+		if (rx_mq_mode == RTE_ETH_MQ_RX_NONE ||
+		    rx_mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
 		} else {
 			/* Only support one queue on VFs.
@@ -1108,12 +1108,12 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 		/* TX mode is not used here, so mode might be ignored.*/
-		if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(WARNING, "SRIOV is active,"
 					" TX mode %d is not supported. "
 					" Driver will behave as %d mode.",
-					tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+					tx_mq_mode, RTE_ETH_MQ_TX_VMDQ_ONLY);
 		}
 
 		/* check valid queue number */
@@ -1126,17 +1126,17 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 		/* To no break software that set invalid mode, only display
 		 * warning if invalid mode is used.
 		 */
-		if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		    rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
-		    rx_mq_mode != ETH_MQ_RX_RSS) {
+		if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 			/* RSS together with VMDq not supported*/
 			PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				     rx_mq_mode);
 			return -EINVAL;
 		}
 
-		if (tx_mq_mode != ETH_MQ_TX_NONE &&
-		    tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_NONE &&
+		    tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
 					" Due to txmode is meaningless in this"
 					" driver, just ignore.",
@@ -1155,8 +1155,8 @@ eth_igb_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multipe queue mode checking */
 	ret  = igb_check_mq_mode(dev);
@@ -1296,8 +1296,8 @@ eth_igb_start(struct rte_eth_dev *dev)
 	/*
 	 * VLAN Offload Settings
 	 */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK | \
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_igb_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set vlan offload");
@@ -1305,7 +1305,7 @@ eth_igb_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable VLAN filter since VMDq always use VLAN filter */
 		igb_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1319,39 +1319,39 @@ eth_igb_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -2194,21 +2194,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	case e1000_82576:
 		dev_info->max_rx_queues = 16;
 		dev_info->max_tx_queues = 16;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 16;
 		break;
 
 	case e1000_82580:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
 	case e1000_i350:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
@@ -2234,7 +2234,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		return -EINVAL;
 	}
 	dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -2260,9 +2260,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - E1000_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2305,12 +2305,12 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
 	dev_info->max_rx_pktlen  = 0x3FFF; /* See RLPML register. */
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	switch (hw->mac.type) {
 	case e1000_vfadapt:
 		dev_info->max_rx_queues = 2;
@@ -2411,17 +2411,17 @@ eth_igb_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else if (!link_check) {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -2597,7 +2597,7 @@ eth_igb_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= E1000_CTRL_EXT_EXT_VLAN;
 
 	/* only outer TPID of double VLAN can be configured*/
-	if (qinq && vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (qinq && vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg = E1000_READ_REG(hw, E1000_VET);
 		reg = (reg & (~E1000_VET_VET_EXT)) |
 			((uint32_t)tpid << E1000_VET_VET_EXT_SHIFT);
@@ -2686,7 +2686,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
 	E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
 
 	/* Update maximum packet length */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		E1000_WRITE_REG(hw, E1000_RLPML,
 				dev->data->dev_conf.rxmode.max_rx_pkt_len);
 }
@@ -2704,7 +2704,7 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
 	E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
 
 	/* Update maximum packet length */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		E1000_WRITE_REG(hw, E1000_RLPML,
 			dev->data->dev_conf.rxmode.max_rx_pkt_len +
 						VLAN_TAG_SIZE);
@@ -2716,22 +2716,22 @@ eth_igb_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if(mask & RTE_ETH_VLAN_STRIP_MASK){
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igb_vlan_hw_strip_enable(dev);
 		else
 			igb_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if(mask & RTE_ETH_VLAN_FILTER_MASK){
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igb_vlan_hw_filter_enable(dev);
 		else
 			igb_vlan_hw_filter_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_EXTEND_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if(mask & RTE_ETH_VLAN_EXTEND_MASK){
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			igb_vlan_hw_extend_enable(dev);
 		else
 			igb_vlan_hw_extend_disable(dev);
@@ -2883,7 +2883,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev,
 				     " Port %d: Link Up - speed %u Mbps - %s",
 				     dev->data->port_id,
 				     (unsigned)link.link_speed,
-				     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				     "full-duplex" : "half-duplex");
 		} else {
 			PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3037,13 +3037,13 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -3112,18 +3112,18 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 * on configuration
 		 */
 		switch (fc_conf->mode) {
-		case RTE_FC_NONE:
+		case RTE_ETH_FC_NONE:
 			ctrl &= ~E1000_CTRL_RFCE & ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_RX_PAUSE:
+		case RTE_ETH_FC_RX_PAUSE:
 			ctrl |= E1000_CTRL_RFCE;
 			ctrl &= ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_TX_PAUSE:
+		case RTE_ETH_FC_TX_PAUSE:
 			ctrl |= E1000_CTRL_TFCE;
 			ctrl &= ~E1000_CTRL_RFCE;
 			break;
-		case RTE_FC_FULL:
+		case RTE_ETH_FC_FULL:
 			ctrl |= E1000_CTRL_RFCE | E1000_CTRL_TFCE;
 			break;
 		default:
@@ -3271,22 +3271,22 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -3584,10 +3584,10 @@ eth_igb_rss_reta_update(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
@@ -3625,10 +3625,10 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
@@ -4407,11 +4407,11 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* switch to jumbo mode if needed */
 	if (frame_size > E1000_ETH_MAX_LEN) {
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl |= E1000_RCTL_LPE;
 	} else {
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl &= ~E1000_RCTL_LPE;
 	}
 	E1000_WRITE_REG(hw, E1000_RCTL, rctl);
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9a5..fe355ef6b3b5 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -88,7 +88,7 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
 	if (*vfinfo == NULL)
 		rte_panic("Cannot allocate memory for private VF data\n");
 
-	RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+	RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_8_POOLS;
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712af..78c85fdbb51c 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -111,7 +111,7 @@ struct igb_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /**
@@ -185,7 +185,7 @@ struct igb_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igb_advctx_info ctx_cache[IGB_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 };
 
 #if 1
@@ -1456,13 +1456,13 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	RTE_SET_USED(dev);
-	tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_SCTP_CKSUM  |
-			  DEV_TX_OFFLOAD_TCP_TSO     |
-			  DEV_TX_OFFLOAD_MULTI_SEGS;
+	tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return tx_offload_capa;
 }
@@ -1635,20 +1635,20 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP  |
-			  DEV_RX_OFFLOAD_VLAN_FILTER |
-			  DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_RX_OFFLOAD_UDP_CKSUM   |
-			  DEV_RX_OFFLOAD_TCP_CKSUM   |
-			  DEV_RX_OFFLOAD_JUMBO_FRAME |
-			  DEV_RX_OFFLOAD_KEEP_CRC    |
-			  DEV_RX_OFFLOAD_SCATTER     |
-			  DEV_RX_OFFLOAD_RSS_HASH;
+	rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			  RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+			  RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+			  RTE_ETH_RX_OFFLOAD_SCATTER     |
+			  RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == e1000_i350 ||
 	    hw->mac.type == e1000_i210 ||
 	    hw->mac.type == e1000_i211)
-		rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	return rx_offload_capa;
 }
@@ -1729,7 +1729,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1963,23 +1963,23 @@ igb_hw_rss_hash_set(struct e1000_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = E1000_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
 }
@@ -2045,23 +2045,23 @@ int eth_igb_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -2183,15 +2183,15 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 			E1000_VMOLR_ROPE | E1000_VMOLR_BAM |
 			E1000_VMOLR_MPME);
 
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_UNTAG)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 			vmolr |= E1000_VMOLR_AUPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_MC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 			vmolr |= E1000_VMOLR_ROMPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_UC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 			vmolr |= E1000_VMOLR_ROPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_BROADCAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 			vmolr |= E1000_VMOLR_BAM;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_MULTICAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 			vmolr |= E1000_VMOLR_MPME;
 
 		E1000_WRITE_REG(hw, E1000_VMOLR(i), vmolr);
@@ -2228,7 +2228,7 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 	for (i = 0; i < cfg->nb_pool_maps; i++) {
 		/* set vlan id in VF register and set the valid bit */
 		E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE | \
-                        (cfg->pool_map[i].vlan_id & ETH_VLAN_ID_MAX) | \
+                        (cfg->pool_map[i].vlan_id & RTE_ETH_VLAN_ID_MAX) | \
 			((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT ) & \
 			E1000_VLVF_POOLSEL_MASK)));
 	}
@@ -2281,7 +2281,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t mrqc;
 
-	if (RTE_ETH_DEV_SRIOV(dev).active == ETH_8_POOLS) {
+	if (RTE_ETH_DEV_SRIOV(dev).active == RTE_ETH_8_POOLS) {
 		/*
 		 * SRIOV active scheme
 		 * FIXME if support RSS together with VMDq & SRIOV
@@ -2295,14 +2295,14 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-			case ETH_MQ_RX_RSS:
+			case RTE_ETH_MQ_RX_RSS:
 				igb_rss_configure(dev);
 				break;
-			case ETH_MQ_RX_VMDQ_ONLY:
+			case RTE_ETH_MQ_RX_VMDQ_ONLY:
 				/*Configure general VMDQ only RX parameters*/
 				igb_vmdq_rx_hw_configure(dev);
 				break;
-			case ETH_MQ_RX_NONE:
+			case RTE_ETH_MQ_RX_NONE:
 				/* if mq_mode is none, disable rss mode.*/
 			default:
 				igb_rss_disable(dev);
@@ -2342,7 +2342,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	/*
 	 * Configure support of jumbo frames, if any.
 	 */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
 
 		rctl |= E1000_RCTL_LPE;
@@ -2351,7 +2351,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Set maximum packet length by default, and might be updated
 		 * together with enabling/disabling dual VLAN.
 		 */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			max_len += VLAN_TAG_SIZE;
 
 		E1000_WRITE_REG(hw, E1000_RLPML, max_len);
@@ -2387,7 +2387,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -2458,7 +2458,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
@@ -2502,16 +2502,16 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= E1000_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
 	if (rxmode->offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		rxcsum |= E1000_RXCSUM_TUOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_TUOFL;
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_CRCOFL;
@@ -2519,7 +2519,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 
 		/* clear STRCRC bit in all queues */
@@ -2559,7 +2559,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
 
 	/* Make sure VLAN Filters are off. */
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY)
 		rctl &= ~E1000_RCTL_VFE;
 	/* Don't store bad packets. */
 	rctl &= ~E1000_RCTL_SBP;
@@ -2758,7 +2758,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68a7..4e3ee72608f4 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -116,10 +116,10 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 #define ENA_STATS_ARRAY_TX	ARRAY_SIZE(ena_stats_tx_strings)
 #define ENA_STATS_ARRAY_RX	ARRAY_SIZE(ena_stats_rx_strings)
 
-#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
-			DEV_TX_OFFLOAD_UDP_CKSUM |\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define QUEUE_OFFLOADS (RTE_ETH_TX_OFFLOAD_TCP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 #define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
 		       PKT_TX_IP_CKSUM |\
 		       PKT_TX_TCP_SEG)
@@ -310,7 +310,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 	    (queue_offloads & QUEUE_OFFLOADS)) {
 		/* check if TSO is required */
 		if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 			ena_tx_ctx->tso_enable = true;
 
 			ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
@@ -318,7 +318,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L3 checksum is needed */
 		if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
 			ena_tx_ctx->l3_csum_enable = true;
 
 		if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -335,12 +335,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L4 checksum is needed */
 		if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
 				PKT_TX_UDP_CKSUM) &&
-				(queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+				(queue_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else {
@@ -623,9 +623,9 @@ static int ena_link_update(struct rte_eth_dev *dev,
 	struct rte_eth_link *link = &dev->data->dev_link;
 	struct ena_adapter *adapter = dev->data->dev_private;
 
-	link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
-	link->link_speed = ETH_SPEED_NUM_NONE;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = adapter->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+	link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return 0;
 }
@@ -684,7 +684,7 @@ static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
 	uint32_t max_frame_len = adapter->max_mtu;
 
 	if (adapter->edev_data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_JUMBO_FRAME)
+	    RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		max_frame_len =
 			adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
 
@@ -915,7 +915,7 @@ static int ena_start(struct rte_eth_dev *dev)
 	if (rc)
 		goto err_start_tx;
 
-	if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (adapter->edev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		rc = ena_rss_configure(adapter);
 		if (rc)
 			goto err_rss_init;
@@ -1854,9 +1854,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
 
 	adapter->state = ENA_ADAPTER_STATE_CONFIG;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
-	dev->data->dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+	dev->data->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	adapter->tx_selected_offloads = dev->data->dev_conf.txmode.offloads;
 	adapter->rx_selected_offloads = dev->data->dev_conf.rxmode.offloads;
@@ -1907,36 +1907,36 @@ static int ena_infos_get(struct rte_eth_dev *dev,
 	ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
 
 	dev_info->speed_capa =
-			ETH_LINK_SPEED_1G   |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_5G   |
-			ETH_LINK_SPEED_10G  |
-			ETH_LINK_SPEED_25G  |
-			ETH_LINK_SPEED_40G  |
-			ETH_LINK_SPEED_50G  |
-			ETH_LINK_SPEED_100G;
+			RTE_ETH_LINK_SPEED_1G   |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_5G   |
+			RTE_ETH_LINK_SPEED_10G  |
+			RTE_ETH_LINK_SPEED_25G  |
+			RTE_ETH_LINK_SPEED_40G  |
+			RTE_ETH_LINK_SPEED_50G  |
+			RTE_ETH_LINK_SPEED_100G;
 
 	/* Set Tx & Rx features available for device */
 	if (adapter->offloads.tso4_supported)
-		tx_feat	|= DEV_TX_OFFLOAD_TCP_TSO;
+		tx_feat	|= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (adapter->offloads.tx_csum_supported)
-		tx_feat |= DEV_TX_OFFLOAD_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM;
+		tx_feat |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (adapter->offloads.rx_csum_supported)
-		rx_feat |= DEV_RX_OFFLOAD_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM  |
-			DEV_RX_OFFLOAD_TCP_CKSUM;
+		rx_feat |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
-	rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	rx_feat |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+	tx_feat |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Inform framework about available features */
 	dev_info->rx_offload_capa = rx_feat;
 	if (adapter->offloads.rss_hash_supported)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->rx_queue_offload_capa = rx_feat;
 	dev_info->tx_offload_capa = tx_feat;
 	dev_info->tx_queue_offload_capa = tx_feat;
@@ -2100,7 +2100,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	}
 #endif
 
-	fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH;
+	fill_hash = rx_ring->offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	descs_in_use = rx_ring->ring_size -
 		ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 06ac8b06b5cb..3b1844e50982 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -54,8 +54,8 @@
 
 #define ENA_HASH_KEY_SIZE		40
 
-#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-			ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP)
+#define ENA_ALL_RSS_HF (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define ENA_IO_TXQ_IDX(q)		(2 * (q))
 #define ENA_IO_RXQ_IDX(q)		(2 * (q) + 1)
diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c
index 88afe13da04d..3193faf1fa8c 100644
--- a/drivers/net/ena/ena_rss.c
+++ b/drivers/net/ena/ena_rss.c
@@ -76,7 +76,7 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
 	if (reta_size == 0 || reta_conf == NULL)
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -140,7 +140,7 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
 	    (reta_size > RTE_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL)))
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -200,34 +200,34 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Convert proto to ETH flag */
 	switch (proto) {
 	case ENA_ADMIN_RSS_TCP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		break;
 	case ENA_ADMIN_RSS_TCP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 		break;
 	case ENA_ADMIN_RSS_IP4:
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 		break;
 	case ENA_ADMIN_RSS_IP6:
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 		break;
 	case ENA_ADMIN_RSS_IP4_FRAG:
-		rss_hf |= ETH_RSS_FRAG_IPV4;
+		rss_hf |= RTE_ETH_RSS_FRAG_IPV4;
 		break;
 	case ENA_ADMIN_RSS_NOT_IP:
-		rss_hf |= ETH_RSS_L2_PAYLOAD;
+		rss_hf |= RTE_ETH_RSS_L2_PAYLOAD;
 		break;
 	case ENA_ADMIN_RSS_TCP6_EX:
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 		break;
 	case ENA_ADMIN_RSS_IP6_EX:
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 		break;
 	default:
 		break;
@@ -236,10 +236,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L3. */
 	switch (fields & ENA_HF_RSS_ALL_L3) {
 	case ENA_ADMIN_RSS_L3_SA:
-		rss_hf |= ETH_RSS_L3_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L3_DA:
-		rss_hf |= ETH_RSS_L3_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_DST_ONLY;
 		break;
 	default:
 		break;
@@ -248,10 +248,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L4. */
 	switch (fields & ENA_HF_RSS_ALL_L4) {
 	case ENA_ADMIN_RSS_L4_SP:
-		rss_hf |= ETH_RSS_L4_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L4_DP:
-		rss_hf |= ETH_RSS_L4_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_DST_ONLY;
 		break;
 	default:
 		break;
@@ -269,11 +269,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA;
 
 	/* Determine which fields of L3 should be used. */
-	switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) {
-	case ETH_RSS_L3_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) {
+	case RTE_ETH_RSS_L3_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_DA;
 		break;
-	case ETH_RSS_L3_SRC_ONLY:
+	case RTE_ETH_RSS_L3_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_SA;
 		break;
 	default:
@@ -285,11 +285,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	}
 
 	/* Determine which fields of L4 should be used. */
-	switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) {
-	case ETH_RSS_L4_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+	case RTE_ETH_RSS_L4_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_DP;
 		break;
-	case ETH_RSS_L4_SRC_ONLY:
+	case RTE_ETH_RSS_L4_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_SP;
 		break;
 	default:
@@ -335,43 +335,43 @@ static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf)
 	int rc, i;
 
 	/* Turn on appropriate fields for each requested packet type */
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_FRAG_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf);
 
-	if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0)
+	if ((rss_hf & RTE_ETH_RSS_L2_PAYLOAD) != 0)
 		selected_fields[ENA_ADMIN_RSS_NOT_IP].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf);
 
@@ -542,7 +542,7 @@ int ena_rss_hash_conf_get(struct rte_eth_dev *dev,
 	uint16_t admin_hf;
 	static bool warn_once;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n");
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index b496cd470045..e0fb44edeb41 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -100,27 +100,27 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	status = enetc_port_rd(enetc_hw, ENETC_PM0_STATUS);
 
 	if (status & ENETC_LINK_MODE)
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 
 	if (status & ENETC_LINK_STATUS)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	switch (status & ENETC_LINK_SPEED_MASK) {
 	case ENETC_LINK_SPEED_1G:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case ENETC_LINK_SPEED_100M:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	default:
 	case ENETC_LINK_SPEED_10M:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -207,11 +207,11 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 	dev_info->max_tx_queues = MAX_TX_RINGS;
 	dev_info->max_rx_pktlen = ENETC_MAC_MAXFRM_SIZE;
 	dev_info->rx_offload_capa =
-		(DEV_RX_OFFLOAD_IPV4_CKSUM |
-		 DEV_RX_OFFLOAD_UDP_CKSUM |
-		 DEV_RX_OFFLOAD_TCP_CKSUM |
-		 DEV_RX_OFFLOAD_KEEP_CRC |
-		 DEV_RX_OFFLOAD_JUMBO_FRAME);
+		(RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
 
 	return 0;
 }
@@ -462,7 +462,7 @@ enetc_rx_queue_setup(struct rte_eth_dev *dev,
 			       RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 
-	rx_ring->crc_len = (uint8_t)((rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+	rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				     RTE_ETHER_CRC_LEN : 0);
 
 	return 0;
@@ -679,10 +679,10 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > ENETC_ETH_MAX_LEN)
 		dev->data->dev_conf.rxmode.offloads &=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
 	enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
@@ -708,7 +708,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		uint32_t max_len;
 
 		max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -723,7 +723,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 			RTE_ETHER_CRC_LEN;
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		int config;
 
 		config = enetc_port_rd(enetc_hw, ENETC_PM0_CMD_CFG);
@@ -731,10 +731,10 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 		enetc_port_wr(enetc_hw, ENETC_PM0_CMD_CFG, config);
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		checksum &= ~L3_CKSUM;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		checksum &= ~L4_CKSUM;
 
 	enetc_port_wr(enetc_hw, ENETC_PAR_PORT_CFG, checksum);
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b8f..d4858326ed7a 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -38,30 +38,30 @@ static const struct vic_speed_capa {
 	uint16_t sub_devid;
 	uint32_t capa;
 } vic_speed_capa_map[] = {
-	{ 0x0043, ETH_LINK_SPEED_10G }, /* VIC */
-	{ 0x0047, ETH_LINK_SPEED_10G }, /* P81E PCIe */
-	{ 0x0048, ETH_LINK_SPEED_10G }, /* M81KR Mezz */
-	{ 0x004f, ETH_LINK_SPEED_10G }, /* 1280 Mezz */
-	{ 0x0084, ETH_LINK_SPEED_10G }, /* 1240 MLOM */
-	{ 0x0085, ETH_LINK_SPEED_10G }, /* 1225 PCIe */
-	{ 0x00cd, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1285 PCIe */
-	{ 0x00ce, ETH_LINK_SPEED_10G }, /* 1225T PCIe */
-	{ 0x012a, ETH_LINK_SPEED_40G }, /* M4308 */
-	{ 0x012c, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1340 MLOM */
-	{ 0x012e, ETH_LINK_SPEED_10G }, /* 1227 PCIe */
-	{ 0x0137, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1380 Mezz */
-	{ 0x014d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1385 PCIe */
-	{ 0x015d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1387 MLOM */
-	{ 0x0215, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1440 Mezz */
-	{ 0x0216, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1480 MLOM */
-	{ 0x0217, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1455 PCIe */
-	{ 0x0218, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1457 MLOM */
-	{ 0x0219, ETH_LINK_SPEED_40G }, /* 1485 PCIe */
-	{ 0x021a, ETH_LINK_SPEED_40G }, /* 1487 MLOM */
-	{ 0x024a, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1495 PCIe */
-	{ 0x024b, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1497 MLOM */
+	{ 0x0043, RTE_ETH_LINK_SPEED_10G }, /* VIC */
+	{ 0x0047, RTE_ETH_LINK_SPEED_10G }, /* P81E PCIe */
+	{ 0x0048, RTE_ETH_LINK_SPEED_10G }, /* M81KR Mezz */
+	{ 0x004f, RTE_ETH_LINK_SPEED_10G }, /* 1280 Mezz */
+	{ 0x0084, RTE_ETH_LINK_SPEED_10G }, /* 1240 MLOM */
+	{ 0x0085, RTE_ETH_LINK_SPEED_10G }, /* 1225 PCIe */
+	{ 0x00cd, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1285 PCIe */
+	{ 0x00ce, RTE_ETH_LINK_SPEED_10G }, /* 1225T PCIe */
+	{ 0x012a, RTE_ETH_LINK_SPEED_40G }, /* M4308 */
+	{ 0x012c, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1340 MLOM */
+	{ 0x012e, RTE_ETH_LINK_SPEED_10G }, /* 1227 PCIe */
+	{ 0x0137, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1380 Mezz */
+	{ 0x014d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1385 PCIe */
+	{ 0x015d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1387 MLOM */
+	{ 0x0215, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1440 Mezz */
+	{ 0x0216, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1480 MLOM */
+	{ 0x0217, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1455 PCIe */
+	{ 0x0218, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1457 MLOM */
+	{ 0x0219, RTE_ETH_LINK_SPEED_40G }, /* 1485 PCIe */
+	{ 0x021a, RTE_ETH_LINK_SPEED_40G }, /* 1487 MLOM */
+	{ 0x024a, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1495 PCIe */
+	{ 0x024b, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1497 MLOM */
 	{ 0, 0 }, /* End marker */
 };
 
@@ -293,8 +293,8 @@ static int enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	ENICPMD_FUNC_TRACE();
 
 	offloads = eth_dev->data->dev_conf.rxmode.offloads;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			enic->ig_vlan_strip_en = 1;
 		else
 			enic->ig_vlan_strip_en = 0;
@@ -319,17 +319,17 @@ static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
 		return ret;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	enic->mc_count = 0;
 	enic->hw_ip_checksum = !!(eth_dev->data->dev_conf.rxmode.offloads &
-				  DEV_RX_OFFLOAD_CHECKSUM);
+				  RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* All vlan offload masks to apply the current settings */
-	mask = ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = enicpmd_vlan_offload_set(eth_dev, mask);
 	if (ret) {
 		dev_err(enic, "Failed to configure VLAN offloads\n");
@@ -431,14 +431,14 @@ static uint32_t speed_capa_from_pci_id(struct rte_eth_dev *eth_dev)
 	}
 	/* 1300 and later models are at least 40G */
 	if (id >= 0x0100)
-		return ETH_LINK_SPEED_40G;
+		return RTE_ETH_LINK_SPEED_40G;
 	/* VFs have subsystem id 0, check device id */
 	if (id == 0) {
 		/* Newer VF implies at least 40G model */
 		if (pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_SN)
-			return ETH_LINK_SPEED_40G;
+			return RTE_ETH_LINK_SPEED_40G;
 	}
-	return ETH_LINK_SPEED_10G;
+	return RTE_ETH_LINK_SPEED_10G;
 }
 
 static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
@@ -879,7 +879,7 @@ static void enicpmd_dev_rxq_info_get(struct rte_eth_dev *dev,
 	 */
 	conf->offloads = enic->rx_offload_capa;
 	if (!enic->ig_vlan_strip_en)
-		conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* rx_thresh and other fields are not applicable for enic */
 }
 
@@ -965,8 +965,8 @@ static int enicpmd_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
 static int udp_tunnel_common_check(struct enic *enic,
 				   struct rte_eth_udp_tunnel *tnl)
 {
-	if (tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN &&
-	    tnl->prot_type != RTE_TUNNEL_TYPE_GENEVE)
+	if (tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN &&
+	    tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
 		return -ENOTSUP;
 	if (!enic->overlay_offload) {
 		ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
@@ -1006,7 +1006,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
@@ -1035,7 +1035,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6a4..754cf362c6d8 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -430,7 +430,7 @@ int enic_link_update(struct rte_eth_dev *eth_dev)
 
 	memset(&link, 0, sizeof(link));
 	link.link_status = enic_get_link_status(enic);
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = vnic_dev_port_speed(enic->vdev);
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -597,7 +597,7 @@ int enic_enable(struct enic *enic)
 	}
 
 	eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* vnic notification of link status has already been turned on in
 	 * enic_dev_init() which is called during probe time.  Here we are
@@ -638,11 +638,11 @@ int enic_enable(struct enic *enic)
 	 * and vlan insertion are supported.
 	 */
 	simple_tx_offloads = enic->tx_offload_capa &
-		(DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_VLAN_INSERT |
-		 DEV_TX_OFFLOAD_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_UDP_CKSUM |
-		 DEV_TX_OFFLOAD_TCP_CKSUM);
+		(RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if ((eth_dev->data->dev_conf.txmode.offloads &
 	     ~simple_tx_offloads) == 0) {
 		ENICPMD_LOG(DEBUG, " use the simple tx handler");
@@ -858,7 +858,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
 	max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
 
 	if (enic->rte_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_SCATTER) {
+	    RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
 		/* ceil((max pkt len)/mbuf_size) */
 		mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
@@ -1386,15 +1386,15 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 	rss_hash_type = 0;
 	rss_hf = rss_conf->rss_hf & enic->flow_type_rss_offloads;
 	if (enic->rq_count > 1 &&
-	    (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+	    (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
 	    rss_hf != 0) {
 		rss_enable = 1;
-		if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			      ETH_RSS_NONFRAG_IPV4_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV4;
 			if (enic->udp_rss_weak) {
 				/*
@@ -1405,12 +1405,12 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
 			}
 		}
-		if (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_IPV6_EX |
-			      ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_IPV6_EX |
+			      RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX))
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX)) {
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV6;
 			if (enic->udp_rss_weak)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
@@ -1751,9 +1751,9 @@ enic_enable_overlay_offload(struct enic *enic)
 		return -EINVAL;
 	}
 	enic->tx_offload_capa |=
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		(enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
-		(enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		(enic->geneve ? RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
+		(enic->vxlan ? RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
 	enic->tx_offload_mask |=
 		PKT_TX_OUTER_IPV6 |
 		PKT_TX_OUTER_IPV4 |
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index a8f5332a407f..12f734260ca5 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -147,31 +147,31 @@ int enic_get_vnic_config(struct enic *enic)
 		 * IPV4 hash type handles both non-frag and frag packet types.
 		 * TCP/UDP is controlled via a separate flag below.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV4 |
-			ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV4 |
+			RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (ENIC_SETTING(enic, RSSHASH_IPV6))
 		/*
 		 * The VIC adapter can perform RSS on IPv6 packets with and
 		 * without extension headers. An IPv6 "fragment" is an IPv6
 		 * packet with the fragment extension header.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV6 |
-			ETH_RSS_IPV6_EX | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV6 |
+			RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP |
-			ETH_RSS_IPV6_TCP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			RTE_ETH_RSS_IPV6_TCP_EX;
 	if (enic->udp_rss_weak)
 		enic->flow_type_rss_offloads |=
-			ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 
 	/* Zero offloads if RSS is not enabled */
 	if (!ENIC_SETTING(enic, RSS))
@@ -201,20 +201,20 @@ int enic_get_vnic_config(struct enic *enic)
 	enic->tx_queue_offload_capa = 0;
 	enic->tx_offload_capa =
 		enic->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	enic->rx_offload_capa =
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	enic->tx_offload_mask =
 		PKT_TX_IPV6 |
 		PKT_TX_IPV4 |
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index 8216063a3d8b..9b22a6ce8941 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -17,10 +17,10 @@
 
 const char pmd_failsafe_driver_name[] = FAILSAFE_DRIVER_NAME;
 static const struct rte_eth_link eth_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_UP,
-	.link_autoneg = ETH_LINK_AUTONEG,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_UP,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG,
 };
 
 static int
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c18..5f4810051dac 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -326,7 +326,7 @@ int failsafe_rx_intr_install_subdevice(struct sub_device *sdev)
 	int qid;
 	struct rte_eth_dev *fsdev;
 	struct rxq **rxq;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 				&ETH(sdev)->data->dev_conf.intr_conf;
 
 	fsdev = fs_dev(sdev);
@@ -519,7 +519,7 @@ int
 failsafe_rx_intr_install(struct rte_eth_dev *dev)
 {
 	struct fs_priv *priv = PRIV(dev);
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 			&priv->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq == 0 || dev->intr_handle != NULL)
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e034..8cb215651df8 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1182,53 +1182,53 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
 	 * configuring a sub-device.
 	 */
 	infos->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->rx_queue_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	infos->flow_type_rss_offloads =
-		ETH_RSS_IP |
-		ETH_RSS_UDP |
-		ETH_RSS_TCP;
+		RTE_ETH_RSS_IP |
+		RTE_ETH_RSS_UDP |
+		RTE_ETH_RSS_TCP;
 	infos->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc4b..7af115399e0f 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -177,7 +177,7 @@ struct fm10k_rx_queue {
 	uint8_t drop_en;
 	uint8_t rx_deferred_start; /* don't start this queue in dev start. */
 	uint16_t rx_ftag_en; /* indicates FTAG RX supported */
-	uint64_t offloads; /* offloads of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /* offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /*
@@ -209,7 +209,7 @@ struct fm10k_tx_queue {
 	uint16_t next_rs; /* Next pos to set RS flag */
 	uint16_t next_dd; /* Next pos to check DD flag */
 	volatile uint32_t *tail_ptr;
-	uint64_t offloads; /* Offloads of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /* Offloads of RTE_ETH_TX_OFFLOAD_* */
 	uint16_t nb_desc;
 	uint16_t port_id;
 	uint8_t tx_deferred_start; /** don't start this queue in dev start. */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e4021..b5935d714a37 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -413,12 +413,12 @@ fm10k_check_mq_mode(struct rte_eth_dev *dev)
 
 	vmdq_conf = &dev->data->dev_conf.rx_adv_conf.vmdq_rx_conf;
 
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 
-	if (!(rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		return 0;
 
 	if (hw->mac.type == fm10k_mac_vf) {
@@ -449,8 +449,8 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multipe queue mode checking */
 	ret  = fm10k_check_mq_mode(dev);
@@ -510,7 +510,7 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 		0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
 	};
 
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS ||
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS ||
 		dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) {
 		FM10K_WRITE_REG(hw, FM10K_MRQC(0), 0);
 		return;
@@ -547,15 +547,15 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 	 */
 	hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	if (mrqc == 0) {
 		PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not"
@@ -602,7 +602,7 @@ fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	if (hw->mac.type != fm10k_mac_pf)
 		return;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		nb_queue_pools = vmdq_conf->nb_queue_pools;
 
 	/* no pool number change, no need to update logic port and VLAN/MAC */
@@ -759,7 +759,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
 		/* It adds dual VLAN length for supporting dual VLAN */
 		if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
 				2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
-			rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
+			rxq->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 			uint32_t reg;
 			dev->data->scattered_rx = 1;
 			reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
@@ -1145,7 +1145,7 @@ fm10k_dev_start(struct rte_eth_dev *dev)
 	}
 
 	/* Update default vlan when not in VMDQ mode */
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
 
 	fm10k_link_update(dev, 0);
@@ -1222,11 +1222,11 @@ fm10k_link_update(struct rte_eth_dev *dev,
 		FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
 	PMD_INIT_FUNC_TRACE();
 
-	dev->data->dev_link.link_speed  = ETH_SPEED_NUM_50G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_speed  = RTE_ETH_SPEED_NUM_50G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	dev->data->dev_link.link_status =
-		dev_info->sm_down ? ETH_LINK_DOWN : ETH_LINK_UP;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
+		dev_info->sm_down ? RTE_ETH_LINK_DOWN : RTE_ETH_LINK_UP;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return 0;
 }
@@ -1378,7 +1378,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 	dev_info->max_vfs            = pdev->max_vfs;
 	dev_info->vmdq_pool_base     = 0;
 	dev_info->vmdq_queue_base    = 0;
-	dev_info->max_vmdq_pools     = ETH_32_POOLS;
+	dev_info->max_vmdq_pools     = RTE_ETH_32_POOLS;
 	dev_info->vmdq_queue_num     = FM10K_MAX_QUEUES_PF;
 	dev_info->rx_queue_offload_capa = fm10k_get_rx_queue_offloads_capa(dev);
 	dev_info->rx_offload_capa = fm10k_get_rx_port_offloads_capa(dev) |
@@ -1389,15 +1389,15 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 
 	dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
 	dev_info->reta_size = FM10K_MAX_RSS_INDICES;
-	dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-					ETH_RSS_IPV6 |
-					ETH_RSS_IPV6_EX |
-					ETH_RSS_NONFRAG_IPV4_TCP |
-					ETH_RSS_NONFRAG_IPV6_TCP |
-					ETH_RSS_IPV6_TCP_EX |
-					ETH_RSS_NONFRAG_IPV4_UDP |
-					ETH_RSS_NONFRAG_IPV6_UDP |
-					ETH_RSS_IPV6_UDP_EX;
+	dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+					RTE_ETH_RSS_IPV6 |
+					RTE_ETH_RSS_IPV6_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+					RTE_ETH_RSS_IPV6_TCP_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+					RTE_ETH_RSS_IPV6_UDP_EX;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -1435,9 +1435,9 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 		.nb_mtu_seg_max = FM10K_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-			ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+			RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1509,7 +1509,7 @@ fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 		return -EINVAL;
 	}
 
-	if (vlan_id > ETH_VLAN_ID_MAX) {
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX) {
 		PMD_INIT_LOG(ERR, "Invalid vlan_id: must be < 4096");
 		return -EINVAL;
 	}
@@ -1767,21 +1767,21 @@ static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
+	return (uint64_t)(RTE_ETH_RX_OFFLOAD_SCATTER);
 }
 
 static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return  (uint64_t)(DEV_RX_OFFLOAD_VLAN_STRIP  |
-			   DEV_RX_OFFLOAD_VLAN_FILTER |
-			   DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			   DEV_RX_OFFLOAD_UDP_CKSUM   |
-			   DEV_RX_OFFLOAD_TCP_CKSUM   |
-			   DEV_RX_OFFLOAD_JUMBO_FRAME |
-			   DEV_RX_OFFLOAD_HEADER_SPLIT |
-			   DEV_RX_OFFLOAD_RSS_HASH);
+	return  (uint64_t)(RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+			   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH);
 }
 
 static int
@@ -1966,12 +1966,12 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_MULTI_SEGS  |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_TSO);
+	return (uint64_t)(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO);
 }
 
 static int
@@ -2199,15 +2199,15 @@ fm10k_rss_hash_update(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	/* If the mapping doesn't fit any supported, return */
 	if (mrqc == 0)
@@ -2244,15 +2244,15 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0));
 	hf = 0;
-	hf |= (mrqc & FM10K_MRQC_IPV4)     ? ETH_RSS_IPV4              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6_EX           : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_NONFRAG_IPV4_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_NONFRAG_IPV6_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX       : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_NONFRAG_IPV4_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_NONFRAG_IPV6_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV4)     ? RTE_ETH_RSS_IPV4              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6_EX           : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_IPV6_TCP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_IPV6_UDP_EX       : 0;
 
 	rss_conf->rss_hf = hf;
 
@@ -2607,7 +2607,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 
 			/* first clear the internal SW recording structure */
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					false);
 
@@ -2623,7 +2623,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 					MAIN_VSI_POOL_NUMBER);
 
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					true);
 
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2da6..50973a662c67 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -208,11 +208,11 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 #ifndef RTE_FM10K_RX_OLFLAGS_ENABLE
 	/* whithout rx ol_flags, no VP flag report */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 #endif
 
@@ -221,7 +221,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 		return -1;
 
 	/* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	return 0;
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa287..80f9eb5c3031 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -1320,28 +1320,28 @@ hinic_cable_status_event(u8 cmd, void *buf_in, __rte_unused u16 in_size,
 static int hinic_link_event_process(struct hinic_hwdev *hwdev,
 				    struct rte_eth_dev *eth_dev, u8 status)
 {
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 	struct nic_port_info port_info;
 	struct rte_eth_link link;
 	int rc = HINIC_OK;
 
 	if (!status) {
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
 		memset(&port_info, 0, sizeof(port_info));
 		rc = hinic_get_port_info(hwdev, &port_info);
 		if (rc) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
-			link.link_autoneg = ETH_LINK_FIXED;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+			link.link_autoneg = RTE_ETH_LINK_FIXED;
 		} else {
 			link.link_speed = port_speed[port_info.speed %
 						LINK_SPEED_MAX];
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 1a7240154668..17f32692fb2d 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -311,8 +311,8 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* mtu size is 256~9600 */
 	if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
@@ -338,7 +338,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 
 	/* init vlan offoad */
 	err = hinic_vlan_offload_set(dev,
-				ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+				RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Initialize vlan filter and strip failed");
 		(void)hinic_config_mq_mode(dev, FALSE);
@@ -696,15 +696,15 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
 	} else {
 		*speed_capa = 0;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_1G))
-			*speed_capa |= ETH_LINK_SPEED_1G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_1G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_10G))
-			*speed_capa |= ETH_LINK_SPEED_10G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_10G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_25G))
-			*speed_capa |= ETH_LINK_SPEED_25G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_25G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_40G))
-			*speed_capa |= ETH_LINK_SPEED_40G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_40G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_100G))
-			*speed_capa |= ETH_LINK_SPEED_100G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	}
 }
 
@@ -732,25 +732,25 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 
 	hinic_get_speed_capa(dev, &info->speed_capa);
 	info->rx_queue_offload_capa = 0;
-	info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_UDP_CKSUM |
-				DEV_RX_OFFLOAD_TCP_CKSUM |
-				DEV_RX_OFFLOAD_VLAN_FILTER |
-				DEV_RX_OFFLOAD_SCATTER |
-				DEV_RX_OFFLOAD_JUMBO_FRAME |
-				DEV_RX_OFFLOAD_TCP_LRO |
-				DEV_RX_OFFLOAD_RSS_HASH;
+	info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				RTE_ETH_RX_OFFLOAD_SCATTER |
+				RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+				RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	info->tx_queue_offload_capa = 0;
-	info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_UDP_CKSUM |
-				DEV_TX_OFFLOAD_TCP_CKSUM |
-				DEV_TX_OFFLOAD_SCTP_CKSUM |
-				DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_TCP_TSO |
-				DEV_TX_OFFLOAD_MULTI_SEGS;
+	info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	info->hash_key_size = HINIC_RSS_KEY_SIZE;
 	info->reta_size = HINIC_RSS_INDIR_SIZE;
@@ -847,20 +847,20 @@ static int hinic_priv_get_dev_link_status(struct hinic_nic_dev *nic_dev,
 	u8 port_link_status = 0;
 	struct nic_port_info port_link_info;
 	struct hinic_hwdev *nic_hwdev = nic_dev->hwdev;
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 
 	rc = hinic_get_link_status(nic_hwdev, &port_link_status);
 	if (rc)
 		return rc;
 
 	if (!port_link_status) {
-		link->link_status = ETH_LINK_DOWN;
+		link->link_status = RTE_ETH_LINK_DOWN;
 		link->link_speed = 0;
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
-		link->link_autoneg = ETH_LINK_FIXED;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_FIXED;
 		return HINIC_OK;
 	}
 
@@ -902,8 +902,8 @@ static int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* Get link status information from hardware */
 		rc = hinic_priv_get_dev_link_status(nic_dev, &link);
 		if (rc != HINIC_OK) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Get link status failed");
 			goto out;
 		}
@@ -1552,10 +1552,10 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 	frame_size = HINIC_MTU_TO_PKTLEN(mtu);
 	if (frame_size > HINIC_ETH_MAX_LEN)
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 	nic_dev->mtu_size = mtu;
@@ -1664,8 +1664,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	int err;
 
 	/* Enable or disable VLAN filter */
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) ?
 			TRUE : FALSE;
 		err = hinic_config_vlan_filter(nic_dev->hwdev, on);
 		if (err == HINIC_MGMT_CMD_UNSUPPORTED) {
@@ -1686,8 +1686,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Enable or disable VLAN stripping */
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ?
 			TRUE : FALSE;
 		err = hinic_set_rx_vlan_offload(nic_dev->hwdev, on);
 		if (err) {
@@ -1873,13 +1873,13 @@ static int hinic_flow_ctrl_get(struct rte_eth_dev *dev,
 	fc_conf->autoneg = nic_pause.auto_neg;
 
 	if (nic_pause.tx_pause && nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (nic_pause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else if (nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -1893,14 +1893,14 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	nic_pause.auto_neg = fc_conf->autoneg;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		nic_pause.tx_pause = true;
 	else
 		nic_pause.tx_pause = false;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		nic_pause.rx_pause = true;
 	else
 		nic_pause.rx_pause = false;
@@ -1944,7 +1944,7 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err = 0;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_OK;
 	}
@@ -1965,14 +1965,14 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 		}
 	}
 
-	rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 
 	err = hinic_set_rss_type(nic_dev->hwdev, tmpl_idx, rss_type);
 	if (err) {
@@ -2008,7 +2008,7 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_ERROR;
 	}
@@ -2029,15 +2029,15 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	rss_conf->rss_hf |=  rss_type.ipv4 ?
-		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4) : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4) : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
 	rss_conf->rss_hf |=  rss_type.ipv6 ?
-		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6) : 0;
-	rss_conf->rss_hf |=  rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6) : 0;
+	rss_conf->rss_hf |=  rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
 
 	return HINIC_OK;
 }
@@ -2067,7 +2067,7 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
 	u16 i = 0;
 	u16 idx, shift;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG))
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG))
 		return HINIC_OK;
 
 	if (reta_size != NIC_RSS_INDIR_SIZE) {
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4cd8..d347afe9a6a9 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -504,14 +504,14 @@ static void hinic_fill_rss_type(struct nic_rss_type *rss_type,
 {
 	u64 rss_hf = rss_conf->rss_hf;
 
-	rss_type->ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type->tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type->ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type->ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type->tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type->tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type->udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type->udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type->ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type->tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type->ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type->ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type->tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type->tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type->udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type->udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 }
 
 static void hinic_fillout_indir_tbl(struct hinic_nic_dev *nic_dev, u32 *indir)
@@ -588,8 +588,8 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 {
 	int err, i;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 		nic_dev->num_rss = 0;
 		if (nic_dev->num_rq > 1) {
 			/* get rss template id */
@@ -599,7 +599,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 				PMD_DRV_LOG(WARNING, "Alloc rss template failed");
 				return err;
 			}
-			nic_dev->flags |= ETH_MQ_RX_RSS_FLAG;
+			nic_dev->flags |= RTE_ETH_MQ_RX_RSS_FLAG;
 			for (i = 0; i < nic_dev->num_rq; i++)
 				hinic_add_rq_to_rx_queue_list(nic_dev, i);
 		}
@@ -610,12 +610,12 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 
 static void hinic_destroy_num_qps(struct hinic_nic_dev *nic_dev)
 {
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (hinic_rss_template_free(nic_dev->hwdev,
 					    nic_dev->rss_tmpl_idx))
 			PMD_DRV_LOG(WARNING, "Free rss template failed");
 
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 	}
 }
 
@@ -641,7 +641,7 @@ int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on)
 	int ret = 0;
 
 	switch (dev_conf->rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		ret = hinic_config_mq_rx_rss(nic_dev, on);
 		break;
 	default:
@@ -662,7 +662,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	int lro_wqe_num;
 	int buf_size;
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (rss_conf.rss_hf == 0) {
 			rss_conf.rss_hf = HINIC_RSS_OFFLOAD_ALL;
 		} else if ((rss_conf.rss_hf & HINIC_RSS_OFFLOAD_ALL) == 0) {
@@ -678,7 +678,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic_dev->rx_csum_en = HINIC_RX_CSUM_OFFLOAD_EN;
 
 	err = hinic_set_rx_csum_offload(nic_dev->hwdev,
@@ -687,7 +687,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 		goto rx_csum_ofl_err;
 
 	/* config lro */
-	lro_en = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+	lro_en = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ?
 			true : false;
 	max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
 	buf_size = nic_dev->hwdev->nic_io->rq_buf_size;
@@ -726,7 +726,7 @@ void hinic_rx_remove_configure(struct rte_eth_dev *dev)
 {
 	struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		hinic_rss_deinit(nic_dev);
 		hinic_destroy_num_qps(nic_dev);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 8a45f2d9fc50..5c303398b635 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -8,17 +8,17 @@
 #define HINIC_DEFAULT_RX_FREE_THRESH	32
 
 #define HINIC_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |\
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 enum rq_completion_fmt {
 	RQ_COMPLETE_SGE = 1
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index b71e2e9ea451..953c146d0200 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1536,7 +1536,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 		if (dcb_rx_conf->nb_tcs == 0)
 			hw->dcb_info.pfc_en = 1; /* tc0 only */
@@ -1693,7 +1693,7 @@ hns3_update_queue_map_configure(struct hns3_adapter *hns)
 	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	int ret;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		return 0;
 
 	ret = hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
@@ -1713,22 +1713,22 @@ static void
 hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
 {
 	switch (mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_RX_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_TX_PAUSE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->requested_fc_mode = HNS3_FC_FULL;
 		break;
 	default:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
-			  "configured to RTE_FC_NONE", mode);
+			  "configured to RTE_ETH_FC_NONE", mode);
 		break;
 	}
 }
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972bf..64d1da09a707 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -60,29 +60,29 @@ enum hns3_evt_cause {
 };
 
 static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
-	{ ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) }
 };
@@ -500,8 +500,8 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	struct hns3_cmd_desc desc;
 	int ret;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER)) {
 		hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
 		return -EINVAL;
 	}
@@ -514,10 +514,10 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
 	rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
@@ -725,11 +725,11 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	rte_spinlock_lock(&hw->lock);
 	rxmode = &dev->data->dev_conf.rxmode;
 	tmp_mask = (unsigned int)mask;
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* ignore vlan filter configuration during promiscuous mode */
 		if (!dev->data->promiscuous) {
 			/* Enable or disable VLAN filter */
-			enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+			enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ?
 				 true : false;
 
 			ret = hns3_enable_vlan_filter(hns, enable);
@@ -742,9 +742,9 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+		enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ?
 		    true : false;
 
 		ret = hns3_en_hw_strip_rxvtag(hns, enable);
@@ -1118,7 +1118,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+	ret = hns3_vlan_tpid_configure(hns, RTE_ETH_VLAN_TYPE_INNER,
 				       RTE_ETHER_TYPE_VLAN);
 	if (ret) {
 		hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
@@ -1161,7 +1161,7 @@ hns3_restore_vlan_conf(struct hns3_adapter *hns)
 	if (!hw->data->promiscuous) {
 		/* restore vlan filter states */
 		offloads = hw->data->dev_conf.rxmode.offloads;
-		enable = offloads & DEV_RX_OFFLOAD_VLAN_FILTER ? true : false;
+		enable = offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ? true : false;
 		ret = hns3_enable_vlan_filter(hns, enable);
 		if (ret) {
 			hns3_err(hw, "failed to restore vlan rx filter conf, "
@@ -1204,7 +1204,7 @@ hns3_dev_configure_vlan(struct rte_eth_dev *dev)
 			  txmode->hw_vlan_reject_untagged);
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	ret = hns3_vlan_offload_set(dev, mask);
 	if (ret) {
 		hns3_err(hw, "dev config rx vlan offload failed, ret = %d",
@@ -2218,9 +2218,9 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 	int max_tc = 0;
 	int i;
 
-	if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
-	    (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
-	     tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) ||
+	    (tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
+	     tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)) {
 		hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
 			 rx_mq_mode, tx_mq_mode);
 		return -EOPNOTSUPP;
@@ -2228,7 +2228,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 	dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 	dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		if (dcb_rx_conf->nb_tcs > pf->tc_max) {
 			hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
 				 dcb_rx_conf->nb_tcs, pf->tc_max);
@@ -2237,7 +2237,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 		if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
 		      dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
-			hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+			hns3_err(hw, "on RTE_ETH_MQ_RX_DCB_RSS mode, "
 				 "nb_tcs(%d) != %d or %d in rx direction.",
 				 dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
 			return -EINVAL;
@@ -2380,7 +2380,7 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
 	uint16_t mtu;
 	int ret;
 
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME))
 		return 0;
 
 	/*
@@ -2440,11 +2440,11 @@ hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
 	 * configure link_speeds (default 0), which means auto-negotiation.
 	 * In this case, it should return success.
 	 */
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG &&
 	    hw->mac.support_autoneg == 0)
 		return 0;
 
-	if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds != RTE_ETH_LINK_SPEED_AUTONEG) {
 		ret = hns3_check_port_speed(hw, link_speeds);
 		if (ret)
 			return ret;
@@ -2504,15 +2504,15 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 	if (ret)
 		goto cfg_err;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = hns3_setup_dcb(dev);
 		if (ret)
 			goto cfg_err;
 	}
 
 	/* When RSS is not configured, redirect the packet queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		hw->rss_dis_flag = false;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -2533,7 +2533,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -2633,10 +2633,10 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (is_jumbo_frame)
 		dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 	rte_spinlock_unlock(&hw->lock);
 
@@ -2649,15 +2649,15 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_1000M_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 
 	return speed_capa;
 }
@@ -2668,19 +2668,19 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_1G_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_10G_BIT)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	return speed_capa;
 }
@@ -2699,7 +2699,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
 			hns3_get_firber_port_speed_capa(mac->supported_speed);
 
 	if (mac->support_autoneg == 0)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -2725,41 +2725,41 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_KEEP_CRC |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_JUMBO_FRAME |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_outer_udp_cksum_supported(hw))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_indep_txrx_supported(hw))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				 RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
 
 	if (hns3_dev_ptp_supported(hw))
-		info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	info->rx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = HNS3_MAX_RING_DESC,
@@ -2843,7 +2843,7 @@ hns3_update_port_link_info(struct rte_eth_dev *eth_dev)
 
 	ret = hns3_update_link_info(eth_dev);
 	if (ret)
-		hw->mac.link_status = ETH_LINK_DOWN;
+		hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	return ret;
 }
@@ -2856,29 +2856,29 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link->link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link->link_speed = ETH_SPEED_NUM_NONE;
+		new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link->link_duplex = mac->link_duplex;
-	new_link->link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link->link_autoneg = mac->link_autoneg;
 }
 
@@ -2898,8 +2898,8 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 	if (eth_dev->data->dev_started == 0) {
 		new_link.link_autoneg = mac->link_autoneg;
 		new_link.link_duplex = mac->link_duplex;
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
-		new_link.link_status = ETH_LINK_DOWN;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		new_link.link_status = RTE_ETH_LINK_DOWN;
 		goto out;
 	}
 
@@ -2911,7 +2911,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 			break;
 		}
 
-		if (!wait_to_complete || mac->link_status == ETH_LINK_UP)
+		if (!wait_to_complete || mac->link_status == RTE_ETH_LINK_UP)
 			break;
 
 		rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
@@ -3257,31 +3257,31 @@ hns3_parse_speed(int speed_cmd, uint32_t *speed)
 {
 	switch (speed_cmd) {
 	case HNS3_CFG_SPEED_10M:
-		*speed = ETH_SPEED_NUM_10M;
+		*speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case HNS3_CFG_SPEED_100M:
-		*speed = ETH_SPEED_NUM_100M;
+		*speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HNS3_CFG_SPEED_1G:
-		*speed = ETH_SPEED_NUM_1G;
+		*speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HNS3_CFG_SPEED_10G:
-		*speed = ETH_SPEED_NUM_10G;
+		*speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HNS3_CFG_SPEED_25G:
-		*speed = ETH_SPEED_NUM_25G;
+		*speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HNS3_CFG_SPEED_40G:
-		*speed = ETH_SPEED_NUM_40G;
+		*speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HNS3_CFG_SPEED_50G:
-		*speed = ETH_SPEED_NUM_50G;
+		*speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HNS3_CFG_SPEED_100G:
-		*speed = ETH_SPEED_NUM_100G;
+		*speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HNS3_CFG_SPEED_200G:
-		*speed = ETH_SPEED_NUM_200G;
+		*speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	default:
 		return -EINVAL;
@@ -3610,39 +3610,39 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
 	hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex ? 1 : 0);
 
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
 		break;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
 		break;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
 		break;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_200G);
 		break;
@@ -4305,14 +4305,14 @@ hns3_mac_init(struct hns3_hw *hw)
 	int ret;
 
 	pf->support_sfp_query = true;
-	mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+	mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
 		return ret;
 	}
 
-	mac->link_status = ETH_LINK_DOWN;
+	mac->link_status = RTE_ETH_LINK_DOWN;
 
 	return hns3_config_mtu(hw, pf->mps);
 }
@@ -4562,7 +4562,7 @@ hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
 	 * all packets coming in in the receiving direction.
 	 */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, false);
 		if (ret) {
 			hns3_err(hw, "failed to enable promiscuous mode due to "
@@ -4603,7 +4603,7 @@ hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	}
 	/* when promiscuous mode was disabled, restore the vlan filter status */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, true);
 		if (ret) {
 			hns3_err(hw, "failed to disable promiscuous mode due to"
@@ -4723,8 +4723,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 		mac_info->supported_speed =
 					rte_le_to_cpu_32(resp->supported_speed);
 		mac_info->support_autoneg = resp->autoneg_ability;
-		mac_info->link_autoneg = (resp->autoneg == 0) ? ETH_LINK_FIXED
-					: ETH_LINK_AUTONEG;
+		mac_info->link_autoneg = (resp->autoneg == 0) ? RTE_ETH_LINK_FIXED
+					: RTE_ETH_LINK_AUTONEG;
 	} else {
 		mac_info->query_type = HNS3_DEFAULT_QUERY;
 	}
@@ -4735,8 +4735,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 static uint8_t
 hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
 {
-	if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
-		duplex = ETH_LINK_FULL_DUPLEX;
+	if (!(speed == RTE_ETH_SPEED_NUM_10M || speed == RTE_ETH_SPEED_NUM_100M))
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return duplex;
 }
@@ -4786,7 +4786,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 		return ret;
 
 	/* Do nothing if no SFP */
-	if (mac_info.link_speed == ETH_SPEED_NUM_NONE)
+	if (mac_info.link_speed == RTE_ETH_SPEED_NUM_NONE)
 		return 0;
 
 	/*
@@ -4813,7 +4813,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 
 	/* Config full duplex for SFP */
 	return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
-				      ETH_LINK_FULL_DUPLEX);
+				      RTE_ETH_LINK_FULL_DUPLEX);
 }
 
 static void
@@ -4932,10 +4932,10 @@ hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
 	hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
 
 	/*
-	 * If DEV_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
+	 * If RTE_ETH_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
 	 * when receiving frames. Otherwise, CRC will be stripped.
 	 */
-	if (hw->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (hw->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, 0);
 	else
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
@@ -4963,7 +4963,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
 	ret = hns3_cmd_send(hw, &desc, 1);
 	if (ret) {
 		hns3_err(hw, "get link status cmd failed %d", ret);
-		return ETH_LINK_DOWN;
+		return RTE_ETH_LINK_DOWN;
 	}
 
 	req = (struct hns3_link_status_cmd *)desc.data;
@@ -5145,19 +5145,19 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		return HNS3_FIBER_LINK_SPEED_1G_BIT;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		return HNS3_FIBER_LINK_SPEED_10G_BIT;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		return HNS3_FIBER_LINK_SPEED_25G_BIT;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		return HNS3_FIBER_LINK_SPEED_40G_BIT;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		return HNS3_FIBER_LINK_SPEED_50G_BIT;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		return HNS3_FIBER_LINK_SPEED_100G_BIT;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		return HNS3_FIBER_LINK_SPEED_200G_BIT;
 	default:
 		hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
@@ -5395,20 +5395,20 @@ hns3_convert_link_speeds2bitmap_copper(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_10M:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_10M:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_BIT;
 		break;
-	case ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_BIT;
 		break;
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_PHY_LINK_SPEED_1000M_BIT;
 		break;
 	default:
@@ -5424,26 +5424,26 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_1G:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_1G_BIT;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_10G_BIT;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
 		break;
 	default:
@@ -5478,28 +5478,28 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
 static inline uint32_t
 hns3_get_link_speed(uint32_t link_speeds)
 {
-	uint32_t speed = ETH_SPEED_NUM_NONE;
-
-	if (link_speeds & ETH_LINK_SPEED_10M ||
-	    link_speeds & ETH_LINK_SPEED_10M_HD)
-		speed = ETH_SPEED_NUM_10M;
-	if (link_speeds & ETH_LINK_SPEED_100M ||
-	    link_speeds & ETH_LINK_SPEED_100M_HD)
-		speed = ETH_SPEED_NUM_100M;
-	if (link_speeds & ETH_LINK_SPEED_1G)
-		speed = ETH_SPEED_NUM_1G;
-	if (link_speeds & ETH_LINK_SPEED_10G)
-		speed = ETH_SPEED_NUM_10G;
-	if (link_speeds & ETH_LINK_SPEED_25G)
-		speed = ETH_SPEED_NUM_25G;
-	if (link_speeds & ETH_LINK_SPEED_40G)
-		speed = ETH_SPEED_NUM_40G;
-	if (link_speeds & ETH_LINK_SPEED_50G)
-		speed = ETH_SPEED_NUM_50G;
-	if (link_speeds & ETH_LINK_SPEED_100G)
-		speed = ETH_SPEED_NUM_100G;
-	if (link_speeds & ETH_LINK_SPEED_200G)
-		speed = ETH_SPEED_NUM_200G;
+	uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
+
+	if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
+		speed = RTE_ETH_SPEED_NUM_10M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
+		speed = RTE_ETH_SPEED_NUM_100M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
+		speed = RTE_ETH_SPEED_NUM_1G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
+		speed = RTE_ETH_SPEED_NUM_10G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
+		speed = RTE_ETH_SPEED_NUM_25G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
+		speed = RTE_ETH_SPEED_NUM_40G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_50G)
+		speed = RTE_ETH_SPEED_NUM_50G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
+		speed = RTE_ETH_SPEED_NUM_100G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_200G)
+		speed = RTE_ETH_SPEED_NUM_200G;
 
 	return speed;
 }
@@ -5507,11 +5507,11 @@ hns3_get_link_speed(uint32_t link_speeds)
 static uint8_t
 hns3_get_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-	    (link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+	    (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 static int
@@ -5645,9 +5645,9 @@ hns3_apply_link_speed(struct hns3_hw *hw)
 	struct hns3_set_link_speed_cfg cfg;
 
 	memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
-	cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
-			ETH_LINK_AUTONEG : ETH_LINK_FIXED;
-	if (cfg.autoneg != ETH_LINK_AUTONEG) {
+	cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
+			RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+	if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
 		cfg.speed = hns3_get_link_speed(conf->link_speeds);
 		cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
 	}
@@ -5920,7 +5920,7 @@ hns3_do_stop(struct hns3_adapter *hns)
 	ret = hns3_cfg_mac_mode(hw, false);
 	if (ret)
 		return ret;
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) {
 		hns3_configure_all_mac_addr(hns, true);
@@ -6131,17 +6131,17 @@ hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	current_mode = hns3_get_current_fc_mode(dev);
 	switch (current_mode) {
 	case HNS3_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case HNS3_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HNS3_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case HNS3_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	}
 
@@ -6287,7 +6287,7 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
 	int i;
 
 	rte_spinlock_lock(&hw->lock);
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = pf->local_max_tc;
 	else
 		dcb_info->nb_tcs = 1;
@@ -6587,7 +6587,7 @@ hns3_stop_service(struct hns3_adapter *hns)
 	struct rte_eth_dev *eth_dev;
 
 	eth_dev = &rte_eth_devices[hw->data->port_id];
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 	if (hw->adapter_state == HNS3_NIC_STARTED) {
 		rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
 		hns3_update_linkstatus_and_event(hw, false);
@@ -6877,7 +6877,7 @@ get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
 	 * in device of link speed
 	 * below 10 Gbps.
 	 */
-	if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
+	if (hw->mac.link_speed < RTE_ETH_SPEED_NUM_10G) {
 		*state = 0;
 		return 0;
 	}
@@ -6909,7 +6909,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
 	 * configured FEC mode is returned.
 	 * If link is up, current FEC mode is returned.
 	 */
-	if (hw->mac.link_status == ETH_LINK_DOWN) {
+	if (hw->mac.link_status == RTE_ETH_LINK_DOWN) {
 		ret = get_current_fec_auto_state(hw, &auto_state);
 		if (ret)
 			return ret;
@@ -7008,12 +7008,12 @@ get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
 	uint32_t cur_capa;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		cur_capa = fec_capa[1].capa;
 		break;
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		cur_capa = fec_capa[0].capa;
 		break;
 	default:
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 0e4e4269a12f..c40d28af1d46 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -191,10 +191,10 @@ struct hns3_mac {
 	bool default_addr_setted; /* whether default addr(mac_addr) is set */
 	uint8_t media_type;
 	uint8_t phy_addr;
-	uint8_t link_duplex  : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
-	uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
-	uint8_t link_status  : 1; /* ETH_LINK_[DOWN/UP] */
-	uint32_t link_speed;      /* ETH_SPEED_NUM_ */
+	uint8_t link_duplex  : 1; /* RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
+	uint8_t link_status  : 1; /* RTE_ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;      /* RTE_ETH_SPEED_NUM_ */
 	/*
 	 * Some firmware versions support only the SFP speed query. In addition
 	 * to the SFP speed query, some firmware supports the query of the speed
@@ -1114,9 +1114,9 @@ static inline uint64_t
 hns3_txvlan_cap_get(struct hns3_hw *hw)
 {
 	if (hw->port_base_vlan_cfg.state)
-		return DEV_TX_OFFLOAD_VLAN_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	else
-		return DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 }
 
 #endif /* _HNS3_ETHDEV_H_ */
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c806..53d79bb2106c 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -809,15 +809,15 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	hw->adapter_state = HNS3_NIC_CONFIGURING;
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		hns3_err(hw, "setting link speed/duplex not supported");
 		ret = -EINVAL;
 		goto cfg_err;
 	}
 
 	/* When RSS is not configured, redirect the packet queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		hw->rss_dis_flag = false;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -829,7 +829,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	 * If jumbo frames are enabled, MTU needs to be refreshed
 	 * according to the maximum RX packet length.
 	 */
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
 		if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
 		    max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
@@ -853,7 +853,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -931,10 +931,10 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	}
 	if (mtu > RTE_ETHER_MTU)
 		dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 	rte_spinlock_unlock(&hw->lock);
 
@@ -963,33 +963,33 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
 
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_JUMBO_FRAME |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_outer_udp_cksum_supported(hw))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_indep_txrx_supported(hw))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -1669,10 +1669,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	tmp_mask = (unsigned int)mask;
 
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN filter */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = hns3vf_en_vlan_filter(hw, true);
 		else
 			ret = hns3vf_en_vlan_filter(hw, false);
@@ -1682,10 +1682,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Vlan stripping setting */
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ret = hns3vf_en_hw_strip_rxvtag(hw, true);
 		else
 			ret = hns3vf_en_hw_strip_rxvtag(hw, false);
@@ -1753,7 +1753,7 @@ hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
 	int ret;
 
 	dev_conf = &hw->data->dev_conf;
-	en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ? true
+	en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ? true
 								   : false;
 	ret = hns3vf_en_hw_strip_rxvtag(hw, en);
 	if (ret)
@@ -1778,8 +1778,8 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
 	}
 
 	/* Apply vlan offload setting */
-	ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK);
+	ret = hns3vf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		hns3_err(hw, "dev config vlan offload failed, ret = %d.", ret);
 
@@ -2088,7 +2088,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	/*
 	 * The "hns3vf_do_stop" function will also be called by .stop_service to
@@ -2247,31 +2247,31 @@ hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	memset(&new_link, 0, sizeof(new_link));
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link.link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link.link_duplex = mac->link_duplex;
-	new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link.link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg =
-	    !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+	    !(eth_dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
 }
@@ -2599,11 +2599,11 @@ hns3vf_stop_service(struct hns3_adapter *hns)
 		 * Make sure call update link status before hns3vf_stop_poll_job
 		 * because update link status depend on polling job exist.
 		 */
-		hns3vf_update_link_status(hw, ETH_LINK_DOWN, hw->mac.link_speed,
+		hns3vf_update_link_status(hw, RTE_ETH_LINK_DOWN, hw->mac.link_speed,
 					  hw->mac.link_duplex);
 		hns3vf_stop_poll_job(eth_dev);
 	}
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	hns3_set_rxtx_function(eth_dev);
 	rte_wmb();
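
For context, a minimal application-side sketch of how the renamed RTE_ETH_LINK_* constants used in the hunks above are consumed (illustrative only, not part of this patch; the port is assumed to be started and error handling is trimmed):

#include <stdio.h>
#include <rte_ethdev.h>

static void print_link(uint16_t port_id)
{
	struct rte_eth_link link;

	/* Non-blocking query; the PMD reports the renamed RTE_ETH_* values. */
	if (rte_eth_link_get_nowait(port_id, &link) != 0)
		return;

	if (link.link_status == RTE_ETH_LINK_UP)
		printf("port %u: up, %u Mbps, %s\n", port_id,
		       (unsigned int)link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full-duplex" : "half-duplex");
	else
		printf("port %u: down\n", port_id);
}
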
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index fc77979c5f14..0ac8705b590b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1298,10 +1298,10 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
 	 * Kunpeng930 and future kunpeng series support to use src/dst port
 	 * fields to RSS hash for IPv6 SCTP packet type.
 	 */
-	if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
-	    (rss->types & ETH_RSS_IP ||
+	if (rss->types & (RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L4_SRC_ONLY) &&
+	    (rss->types & RTE_ETH_RSS_IP ||
 	    (!hw->rss_info.ipv6_sctp_offload_supported &&
-	    rss->types & ETH_RSS_NONFRAG_IPV6_SCTP)))
+	    rss->types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 		return false;
 
 	return true;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index df8485904688..395590c86c03 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -21,7 +21,7 @@ hns3_mbuf_dyn_rx_timestamp_register(struct rte_eth_dev *dev,
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		return 0;
 
 	ret = rte_mbuf_dyn_rx_timestamp_register
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 3a81e90e0911..2c5661567945 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -76,69 +76,69 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_tuple_table[] = {
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
 };
 
@@ -146,44 +146,44 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_rss_types[] = {
-	{ ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+	{ RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+	{ RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
 };
@@ -365,10 +365,10 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
 	 * When user does not specify the following types or a combination of
 	 * the following types, it enables all fields for the supported RSS
 	 * types. the following types as:
-	 * - ETH_RSS_L3_SRC_ONLY
-	 * - ETH_RSS_L3_DST_ONLY
-	 * - ETH_RSS_L4_SRC_ONLY
-	 * - ETH_RSS_L4_DST_ONLY
+	 * - RTE_ETH_RSS_L3_SRC_ONLY
+	 * - RTE_ETH_RSS_L3_DST_ONLY
+	 * - RTE_ETH_RSS_L4_SRC_ONLY
+	 * - RTE_ETH_RSS_L4_DST_ONLY
 	 */
 	if (fields_count == 0) {
 		for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
@@ -692,7 +692,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	}
 
 	/* When RSS is off, redirect the packet queue 0 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0)
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0)
 		hns3_rss_uninit(hns);
 
 	/* Configure RSS hash algorithm and hash key offset */
@@ -709,7 +709,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	 * When RSS is off, it doesn't need to configure rss redirection table
 	 * to hardware.
 	 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
 					       hw->rss_ind_tbl_size);
 		if (ret)
@@ -723,7 +723,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	return ret;
 
 rss_indir_table_uninit:
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret1 = hns3_rss_reset_indir_table(hw);
 		if (ret1 != 0)
 			return ret;
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 996083b88b25..6f153a1b7bfb 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -8,20 +8,20 @@
 #include <rte_flow.h>
 
 #define HNS3_ETH_RSS_SUPPORT ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L3_SRC_ONLY | \
-	ETH_RSS_L3_DST_ONLY | \
-	ETH_RSS_L4_SRC_ONLY | \
-	ETH_RSS_L4_DST_ONLY)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L3_SRC_ONLY | \
+	RTE_ETH_RSS_L3_DST_ONLY | \
+	RTE_ETH_RSS_L4_SRC_ONLY | \
+	RTE_ETH_RSS_L4_DST_ONLY)
 
 #define HNS3_RSS_IND_TBL_SIZE	512 /* The size of hash lookup table */
 #define HNS3_RSS_IND_TBL_SIZE_MAX 2048
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 0f222b37f9d1..01e43791572b 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1912,7 +1912,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	memset(&rxq->dfx_stats, 0, sizeof(struct hns3_rx_dfx_stats));
 
 	/* CRC len set here is used for amending packet length */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1957,7 +1957,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
 						 rxq->rx_buf_len);
 	}
 
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 	    dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
 		dev->data->scattered_rx = true;
 }
@@ -2833,7 +2833,7 @@ hns3_get_rx_function(struct rte_eth_dev *dev)
 	vec_allowed = vec_support && hns3_get_default_vec_support();
 	sve_allowed = vec_support && hns3_get_sve_support();
 	simple_allowed = !dev->data->scattered_rx &&
-			 (offloads & DEV_RX_OFFLOAD_TCP_LRO) == 0;
+			 (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) == 0;
 
 	if (hns->rx_func_hint == HNS3_IO_FUNC_HINT_VEC && vec_allowed)
 		return hns3_recv_pkts_vec;
@@ -3127,7 +3127,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
 	int ret;
 
 	offloads = hw->data->dev_conf.rxmode.offloads;
-	gro_en = offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		hns3_err(hw, "restore hardware GRO to %s failed, ret = %d",
@@ -4279,7 +4279,7 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
 	if (hns3_dev_ptp_supported(hw))
 		return false;
 
-	return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
+	return (offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE));
 }
 
 static bool
@@ -4291,16 +4291,16 @@ hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
 	return true;
 #else
 #define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
-		DEV_TX_OFFLOAD_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_CKSUM | \
-		DEV_TX_OFFLOAD_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_SCTP_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)
 
 	uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
 	if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0c8..2fa3a01dd3bf 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -307,7 +307,7 @@ struct hns3_rx_queue {
 	uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
 	uint16_t rx_rearm_nb;    /* number of remaining BDs to be re-armed */
 
-	/* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
+	/* 4 if RTE_ETH_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
 	uint8_t crc_len;
 
 	/*
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index 844512f6ceec..d01a8d62bfb1 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -22,8 +22,8 @@ hns3_tx_check_vec_support(struct rte_eth_dev *dev)
 	if (hns3_dev_ptp_supported(hw))
 		return -ENOTSUP;
 
-	/* Only support DEV_TX_OFFLOAD_MBUF_FAST_FREE */
-	if (txmode->offloads != DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	/* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */
+	if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		return -ENOTSUP;
 
 	return 0;
@@ -228,10 +228,10 @@ hns3_rxq_vec_check(struct hns3_rx_queue *rxq, void *arg)
 int
 hns3_rx_check_vec_support(struct rte_eth_dev *dev)
 {
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	uint64_t offloads_mask = DEV_RX_OFFLOAD_TCP_LRO |
-				 DEV_RX_OFFLOAD_VLAN;
+	uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				 RTE_ETH_RX_OFFLOAD_VLAN;
 
 	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if (hns3_dev_ptp_supported(hw))
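
For context, a minimal sketch of an application requesting RSS and checksum offloads through the renamed macros (illustrative only, not part of this patch; port_id and the queue counts are assumptions, and the offload/RSS bits should first be validated against what rte_eth_dev_info_get() reports):

#include <rte_ethdev.h>

static int configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = {
			.mq_mode = RTE_ETH_MQ_RX_RSS,
			.offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
				    RTE_ETH_RX_OFFLOAD_RSS_HASH,
		},
		.rx_adv_conf.rss_conf = {
			/* Hash on IP and TCP headers; must be a subset of the
			 * device's advertised flow_type_rss_offloads. */
			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
		},
		.txmode = {
			.offloads = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
		},
	};

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}
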
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed17a..0d9ebf208614 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1641,7 +1641,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 
 	/* Set the global registers with default ether type value */
 	if (!pf->support_multi_driver) {
-		ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+		ret = i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					 RTE_ETHER_TYPE_VLAN);
 		if (ret != I40E_SUCCESS) {
 			PMD_INIT_LOG(ERR,
@@ -1909,8 +1909,8 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	ad->tx_simple_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Only legacy filter API needs the following fdir config. So when the
 	 * legacy filter API is deprecated, the following codes should also be
@@ -1944,13 +1944,13 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	 *  number, which will be available after rx_queue_setup(). dev_start()
 	 *  function is good to place RSS setup.
 	 */
-	if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) {
 		ret = i40e_vmdq_setup(dev);
 		if (ret)
 			goto err;
 	}
 
-	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = i40e_dcb_setup(dev);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "failed to configure DCB.");
@@ -2227,17 +2227,17 @@ i40e_parse_link_speeds(uint16_t link_speeds)
 {
 	uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
 
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed |= I40E_LINK_SPEED_40GB;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed |= I40E_LINK_SPEED_25GB;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed |= I40E_LINK_SPEED_20GB;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed |= I40E_LINK_SPEED_10GB;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed |= I40E_LINK_SPEED_1GB;
-	if (link_speeds & ETH_LINK_SPEED_100M)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 		link_speed |= I40E_LINK_SPEED_100MB;
 
 	return link_speed;
@@ -2345,13 +2345,13 @@ i40e_apply_link_speed(struct rte_eth_dev *dev)
 	abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK |
 		     I40E_AQ_PHY_LINK_ENABLED;
 
-	if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
-		conf->link_speeds = ETH_LINK_SPEED_40G |
-				    ETH_LINK_SPEED_25G |
-				    ETH_LINK_SPEED_20G |
-				    ETH_LINK_SPEED_10G |
-				    ETH_LINK_SPEED_1G |
-				    ETH_LINK_SPEED_100M;
+	if (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
+		conf->link_speeds = RTE_ETH_LINK_SPEED_40G |
+				    RTE_ETH_LINK_SPEED_25G |
+				    RTE_ETH_LINK_SPEED_20G |
+				    RTE_ETH_LINK_SPEED_10G |
+				    RTE_ETH_LINK_SPEED_1G |
+				    RTE_ETH_LINK_SPEED_100M;
 
 		abilities |= I40E_AQ_PHY_AN_ENABLED;
 	} else {
@@ -2910,34 +2910,34 @@ update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)
 	/* Parse the link status */
 	switch (link_speed) {
 	case I40E_REG_SPEED_0:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_REG_SPEED_1:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_REG_SPEED_2:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_REG_SPEED_3:
 		if (hw->mac.type == I40E_MAC_X722) {
-			link->link_speed = ETH_SPEED_NUM_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		} else {
 			reg_val = I40E_READ_REG(hw, I40E_PRTMAC_MACC);
 
 			if (reg_val & I40E_REG_MACC_25GB)
-				link->link_speed = ETH_SPEED_NUM_25G;
+				link->link_speed = RTE_ETH_SPEED_NUM_25G;
 			else
-				link->link_speed = ETH_SPEED_NUM_40G;
+				link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		}
 		break;
 	case I40E_REG_SPEED_4:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
-			link->link_speed = ETH_SPEED_NUM_20G;
+			link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "Unknown link speed info %u", link_speed);
@@ -2964,8 +2964,8 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 		status = i40e_aq_get_link_info(hw, enable_lse,
 						&link_status, NULL);
 		if (unlikely(status != I40E_SUCCESS)) {
-			link->link_speed = ETH_SPEED_NUM_NONE;
-			link->link_duplex = ETH_LINK_FULL_DUPLEX;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			return;
 		}
@@ -2980,28 +2980,28 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case I40E_LINK_SPEED_100MB:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_LINK_SPEED_1GB:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_LINK_SPEED_20GB:
-		link->link_speed = ETH_SPEED_NUM_20G;
+		link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case I40E_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case I40E_LINK_SPEED_40GB:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	default:
 		if (link->link_status)
-			link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		else
-			link->link_speed = ETH_SPEED_NUM_NONE;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 }
@@ -3018,9 +3018,9 @@ i40e_dev_link_update(struct rte_eth_dev *dev,
 	memset(&link, 0, sizeof(link));
 
 	/* i40e uses full duplex only */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	if (!wait_to_complete && !enable_lse)
 		update_link_reg(hw, &link);
@@ -3748,34 +3748,34 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_RSS_HASH;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 	dev_info->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -3834,7 +3834,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types)) {
 		/* For XL710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_40G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_40G;
 		dev_info->default_rxportconf.nb_queues = 2;
 		dev_info->default_txportconf.nb_queues = 2;
 		if (dev->data->nb_rx_queues == 1)
@@ -3848,17 +3848,17 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	} else if (I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) {
 		/* For XXV710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_25G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
 		dev_info->default_rxportconf.ring_size = 256;
 		dev_info->default_txportconf.ring_size = 256;
 	} else {
 		/* For X710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
-		if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_10G) {
+		if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_10G) {
 			dev_info->default_rxportconf.ring_size = 512;
 			dev_info->default_txportconf.ring_size = 256;
 		} else {
@@ -3897,7 +3897,7 @@ i40e_vlan_tpid_set_by_registers(struct rte_eth_dev *dev,
 	int ret;
 
 	if (qinq) {
-		if (vlan_type == ETH_VLAN_TYPE_OUTER)
+		if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 			reg_id = 2;
 	}
 
@@ -3944,12 +3944,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	int ret = 0;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER) ||
-	    (!qinq && vlan_type == ETH_VLAN_TYPE_INNER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER) ||
+	    (!qinq && vlan_type == RTE_ETH_VLAN_TYPE_INNER)) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -3963,12 +3963,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	/* 802.1ad frames ability is added in NVM API 1.7*/
 	if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
 		if (qinq) {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->first_tag = rte_cpu_to_le_16(tpid);
-			else if (vlan_type == ETH_VLAN_TYPE_INNER)
+			else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		} else {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		}
 		ret = i40e_aq_set_switch_config(hw, 0, 0, 0, NULL);
@@ -4027,37 +4027,37 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_stripping(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			i40e_vsi_config_double_vlan(vsi, TRUE);
 			/* Set global registers with default ethertype. */
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					   RTE_ETHER_TYPE_VLAN);
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					   RTE_ETHER_TYPE_VLAN);
 		}
 		else
 			i40e_vsi_config_double_vlan(vsi, FALSE);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
 		/* Enable or disable outer VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			i40e_vsi_config_outer_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_outer_vlan_stripping(vsi, FALSE);
@@ -4140,17 +4140,17 @@ i40e_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	 /* Return current mode according to actual setting*/
 	switch (hw->fc.current_mode) {
 	case I40E_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case I40E_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case I40E_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case I40E_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	};
 
 	return 0;
@@ -4166,10 +4166,10 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	struct i40e_hw *hw;
 	struct i40e_pf *pf;
 	enum i40e_fc_mode rte_fcmode_2_i40e_fcmode[] = {
-		[RTE_FC_NONE] = I40E_FC_NONE,
-		[RTE_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
-		[RTE_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
-		[RTE_FC_FULL] = I40E_FC_FULL
+		[RTE_ETH_FC_NONE] = I40E_FC_NONE,
+		[RTE_ETH_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
+		[RTE_ETH_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
+		[RTE_ETH_FC_FULL] = I40E_FC_FULL
 	};
 
 	/* high_water field in the rte_eth_fc_conf using the kilobytes unit */
@@ -4316,7 +4316,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
 	}
 
 	rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
 	else
 		mac_filter.filter_type = I40E_MAC_PERFECT_MATCH;
@@ -4469,7 +4469,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4512,7 +4512,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4847,7 +4847,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
 				hw->func_caps.num_vsis - vsi_count);
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
-				ETH_64_POOLS);
+				RTE_ETH_64_POOLS);
 			if (pf->max_nb_vmdq_vsi) {
 				pf->flags |= I40E_FLAG_VMDQ;
 				pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
@@ -6132,10 +6132,10 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
 	int mask = 0;
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK |
-	       ETH_QINQ_STRIP_MASK |
-	       ETH_VLAN_FILTER_MASK |
-	       ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+	       RTE_ETH_QINQ_STRIP_MASK |
+	       RTE_ETH_VLAN_FILTER_MASK |
+	       RTE_ETH_VLAN_EXTEND_MASK;
 	ret = i40e_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_DRV_LOG(INFO, "Failed to update vlan offload");
@@ -6262,9 +6262,9 @@ i40e_pf_setup(struct i40e_pf *pf)
 
 	/* Configure filter control */
 	memset(&settings, 0, sizeof(settings));
-	if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128)
+	if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_128)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_128;
-	else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512)
+	else if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_512)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
 	else {
 		PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported",
@@ -7117,7 +7117,7 @@ i40e_find_vlan_filter(struct i40e_vsi *vsi,
 {
 	uint32_t vid_idx, vid_bit;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return 0;
 
 	vid_idx = I40E_VFTA_IDX(vlan_id);
@@ -7152,7 +7152,7 @@ i40e_set_vlan_filter(struct i40e_vsi *vsi,
 	struct i40e_aqc_add_remove_vlan_element_data vlan_data = {0};
 	int ret;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return;
 
 	i40e_store_vlan_filter(vsi, vlan_id, on);
@@ -8730,16 +8730,16 @@ i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN_GPE);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8765,12 +8765,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8862,7 +8862,7 @@ int
 i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 {
 	struct i40e_hw *hw = &pf->adapter->hw;
-	uint8_t lut[ETH_RSS_RETA_SIZE_512];
+	uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 	uint32_t i;
 	int num;
 
@@ -8870,7 +8870,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 	 * configured. It's necessary to calculate the actual PF
 	 * queues that are configured.
 	 */
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		num = i40e_pf_calc_configured_queues_num(pf);
 	else
 		num = pf->dev_data->nb_rx_queues;
@@ -8949,7 +8949,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
 	rss_hf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
 	mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
 	if (!(rss_hf & pf->adapter->flow_types_mask) ||
-	    !(mq_mode & ETH_MQ_RX_RSS_FLAG))
+	    !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return 0;
 
 	hw = I40E_PF_TO_HW(pf);
@@ -10412,8 +10412,8 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
 		return I40E_ERR_NO_MEMORY;
 	}
 	switch (mirror_conf->rule_type) {
-	case ETH_MIRROR_VLAN:
-		for (i = 0, j = 0; i < ETH_MIRROR_MAX_VLANS; i++) {
+	case RTE_ETH_MIRROR_VLAN:
+		for (i = 0, j = 0; i < RTE_ETH_MIRROR_MAX_VLANS; i++) {
 			if (mirror_conf->vlan.vlan_mask & (1ULL << i)) {
 				mirr_rule->entries[j] =
 					mirror_conf->vlan.vlan_id[i];
@@ -10427,8 +10427,8 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
 		}
 		mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_VLAN;
 		break;
-	case ETH_MIRROR_VIRTUAL_POOL_UP:
-	case ETH_MIRROR_VIRTUAL_POOL_DOWN:
+	case RTE_ETH_MIRROR_VIRTUAL_POOL_UP:
+	case RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN:
 		/* check if the specified pool bit is out of range */
 		if (mirror_conf->pool_mask > (uint64_t)(1ULL << (pf->vf_num + 1))) {
 			PMD_DRV_LOG(ERR, "pool mask is out of range.");
@@ -10453,15 +10453,15 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
 		}
 		/* egress and ingress in aq commands means from switch but not port */
 		mirr_rule->rule_type =
-			(mirror_conf->rule_type == ETH_MIRROR_VIRTUAL_POOL_UP) ?
+			(mirror_conf->rule_type == RTE_ETH_MIRROR_VIRTUAL_POOL_UP) ?
 			I40E_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS :
 			I40E_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS;
 		break;
-	case ETH_MIRROR_UPLINK_PORT:
+	case RTE_ETH_MIRROR_UPLINK_PORT:
 		/* egress and ingress in aq commands means from switch but not port*/
 		mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_ALL_EGRESS;
 		break;
-	case ETH_MIRROR_DOWNLINK_PORT:
+	case RTE_ETH_MIRROR_DOWNLINK_PORT:
 		mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_ALL_INGRESS;
 		break;
 	default:
@@ -10603,16 +10603,16 @@ i40e_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_25G:
 		tsync_inc_l = I40E_PTP_40GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_40GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		tsync_inc_l = I40E_PTP_10GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_10GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		tsync_inc_l = I40E_PTP_1GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_1GB_INCVAL >> 32;
 		break;
@@ -10840,7 +10840,7 @@ i40e_parse_dcb_configure(struct rte_eth_dev *dev,
 	else
 		*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
 
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_cfg->pfc.willing = 0;
 		dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
 		dcb_cfg->pfc.pfcenable = *tc_map;
@@ -11348,7 +11348,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint16_t bsf, tc_mapping;
 	int i, j = 0;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
 	else
 		dcb_info->nb_tcs = 1;
@@ -11396,7 +11396,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 				dcb_info->tc_queue.tc_rxq[j][i].nb_queue;
 		}
 		j++;
-	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, ETH_MAX_VMDQ_POOL));
+	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, RTE_ETH_MAX_VMDQ_POOL));
 	return 0;
 }
 
@@ -11774,10 +11774,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > I40E_ETH_MAX_LEN)
 		dev_data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev_data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60b3..f21c2de6bdb9 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -139,17 +139,17 @@ enum i40e_flxpld_layer_idx {
 		       I40E_FLAG_RSS_AQ_CAPABLE)
 
 #define I40E_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /* All bits of RSS hash enable for X722*/
 #define I40E_RSS_HENA_ALL_X722 ( \
@@ -1076,7 +1076,7 @@ struct i40e_rte_flow_rss_conf {
 	uint8_t key[(I40E_VFQF_HKEY_MAX_INDEX > I40E_PFQF_HKEY_MAX_INDEX ?
 		     I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
 		    sizeof(uint32_t)];		/**< Hash key. */
-	uint16_t queue[ETH_RSS_RETA_SIZE_512];	/**< Queues indices to use. */
+	uint16_t queue[RTE_ETH_RSS_RETA_SIZE_512];	/**< Queues indices to use. */
 
 	bool symmetric_enable;		/**< true, if enable symmetric */
 	uint64_t config_pctypes;	/**< All PCTYPES with the flow  */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 0cfe13b7b227..192e7234909f 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1077,7 +1077,7 @@ i40evf_add_vlan(struct rte_eth_dev *dev, uint16_t vlanid)
 	 * VLAN_STRIP by default. So reconfigure the vlan_offload
 	 * as it was done by the app earlier.
 	 */
-	err = i40evf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK);
+	err = i40evf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK);
 	if (err)
 		PMD_DRV_LOG(ERR, "fail to set vlan_strip");
 
@@ -1403,28 +1403,28 @@ i40evf_handle_pf_event(struct rte_eth_dev *dev, uint8_t *msg,
 				pf_msg->event_data.link_event_adv.link_status;
 
 			switch (pf_msg->event_data.link_event_adv.link_speed) {
-			case ETH_SPEED_NUM_100M:
+			case RTE_ETH_SPEED_NUM_100M:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_100MB;
 				break;
-			case ETH_SPEED_NUM_1G:
+			case RTE_ETH_SPEED_NUM_1G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_1GB;
 				break;
-			case ETH_SPEED_NUM_2_5G:
+			case RTE_ETH_SPEED_NUM_2_5G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_2_5GB;
 				break;
-			case ETH_SPEED_NUM_5G:
+			case RTE_ETH_SPEED_NUM_5G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_5GB;
 				break;
-			case ETH_SPEED_NUM_10G:
+			case RTE_ETH_SPEED_NUM_10G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_10GB;
 				break;
-			case ETH_SPEED_NUM_20G:
+			case RTE_ETH_SPEED_NUM_20G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_20GB;
 				break;
-			case ETH_SPEED_NUM_25G:
+			case RTE_ETH_SPEED_NUM_25G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_25GB;
 				break;
-			case ETH_SPEED_NUM_40G:
+			case RTE_ETH_SPEED_NUM_40G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_40GB;
 				break;
 			default:
@@ -1770,7 +1770,7 @@ static int
 i40evf_init_vlan(struct rte_eth_dev *dev)
 {
 	/* Apply vlan offload setting */
-	i40evf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK);
+	i40evf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK);
 
 	return 0;
 }
@@ -1785,9 +1785,9 @@ i40evf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			i40evf_enable_vlan_strip(dev);
 		else
 			i40evf_disable_vlan_strip(dev);
@@ -1933,7 +1933,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
 	/**
 	 * Check if the jumbo frame and maximum packet length are set correctly
 	 */
-	if (dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
 		    rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -1954,7 +1954,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
 		}
 	}
 
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    rxq->max_pkt_len > buf_size)
 		dev_data->scattered_rx = 1;
 
@@ -2290,35 +2290,35 @@ i40evf_dev_link_update(struct rte_eth_dev *dev,
 	/* Linux driver PF host */
 	switch (vf->link_speed) {
 	case I40E_LINK_SPEED_100MB:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_LINK_SPEED_1GB:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_LINK_SPEED_10GB:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_LINK_SPEED_20GB:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case I40E_LINK_SPEED_25GB:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case I40E_LINK_SPEED_40GB:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	default:
 		if (vf->link_up)
-			new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		else
-			new_link.link_speed = ETH_SPEED_NUM_NONE;
+			new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 	/* full duplex only */
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = vf->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg =
-		!(dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+		!(dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -2367,36 +2367,36 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = dev_info->max_rx_pktlen - I40E_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
 	dev_info->flow_type_rss_offloads = vf->adapter->flow_types_mask;
 	dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	dev_info->tx_queue_offload_capa = 0;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -2596,10 +2596,10 @@ i40evf_dev_rss_reta_update(struct rte_eth_dev *dev,
 	uint16_t i, idx, shift;
 	int ret;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_64) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_64) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number of hardware can "
-			"support (%d)", reta_size, ETH_RSS_RETA_SIZE_64);
+			"support (%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_64);
 		return -EINVAL;
 	}
 
@@ -2635,10 +2635,10 @@ i40evf_dev_rss_reta_query(struct rte_eth_dev *dev,
 	uint8_t *lut;
 	int ret;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_64) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_64) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number of hardware can "
-			"support (%d)", reta_size, ETH_RSS_RETA_SIZE_64);
+			"support (%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_64);
 		return -EINVAL;
 	}
 
@@ -2770,7 +2770,7 @@ i40evf_config_rss(struct i40e_vf *vf)
 	uint8_t *lut_info;
 	int ret;
 
-	if (vf->dev_data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (vf->dev_data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		i40evf_disable_rss(vf);
 		PMD_DRV_LOG(DEBUG, "RSS not configured");
 		return 0;
@@ -2887,10 +2887,10 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > I40E_ETH_MAX_LEN)
 		dev_data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev_data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
 	return ret;
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c47..d1cb992be61d 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2015,7 +2015,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev)
 {
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_VLAN_EXTEND;
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	uint64_t reg_r = 0;
 	uint16_t reg_id;
 	uint16_t tpid;
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfcc6..3755d4d3fe2a 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -102,47 +102,47 @@ struct i40e_hash_map_rss_inset {
 
 const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
 	/* IPv4 */
-	{ ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
-	{ ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* IPv6 */
-	{ ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
-	{ ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* Port */
-	{ ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+	{ RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
 	/* Ether */
-	{ ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
-	{ ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+	{ RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+	{ RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
 
 	/* VLAN */
-	{ ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
-	{ ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+	{ RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+	{ RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
 };
 
 #define I40E_HASH_VOID_NEXT_ALLOW	BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
@@ -201,30 +201,30 @@ struct i40e_hash_match_pattern {
 #define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
 	pattern, rss_mask, true, cus_pctype }
 
-#define I40E_HASH_L2_RSS_MASK		(ETH_RSS_VLAN | ETH_RSS_ETH | \
-					ETH_RSS_L2_SRC_ONLY | \
-					ETH_RSS_L2_DST_ONLY)
+#define I40E_HASH_L2_RSS_MASK		(RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+					RTE_ETH_RSS_L2_SRC_ONLY | \
+					RTE_ETH_RSS_L2_DST_ONLY)
 
 #define I40E_HASH_L23_RSS_MASK		(I40E_HASH_L2_RSS_MASK | \
-					ETH_RSS_L3_SRC_ONLY | \
-					ETH_RSS_L3_DST_ONLY)
+					RTE_ETH_RSS_L3_SRC_ONLY | \
+					RTE_ETH_RSS_L3_DST_ONLY)
 
-#define I40E_HASH_IPV4_L23_RSS_MASK	(ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK	(ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV4_L23_RSS_MASK	(RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK	(RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
 
 #define I40E_HASH_L234_RSS_MASK		(I40E_HASH_L23_RSS_MASK | \
-					ETH_RSS_PORT | ETH_RSS_L4_SRC_ONLY | \
-					ETH_RSS_L4_DST_ONLY)
+					RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+					RTE_ETH_RSS_L4_DST_ONLY)
 
-#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV6)
+#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
 
-#define I40E_HASH_L4_TYPES		(ETH_RSS_NONFRAG_IPV4_TCP | \
-					ETH_RSS_NONFRAG_IPV4_UDP | \
-					ETH_RSS_NONFRAG_IPV4_SCTP | \
-					ETH_RSS_NONFRAG_IPV6_TCP | \
-					ETH_RSS_NONFRAG_IPV6_UDP | \
-					ETH_RSS_NONFRAG_IPV6_SCTP)
+#define I40E_HASH_L4_TYPES		(RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* Current supported patterns and RSS types.
  * All items that have the same pattern types are together.
@@ -232,68 +232,68 @@ struct i40e_hash_match_pattern {
 static const struct i40e_hash_match_pattern match_patterns[] = {
 	/* Ether */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
-			      ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
+			      RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
 			      I40E_FILTER_PCTYPE_L2_PAYLOAD),
 
 	/* IPv4 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV4),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_NONFRAG_IPV4_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
 			      I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
-			      ETH_RSS_NONFRAG_IPV4_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_TCP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
-			      ETH_RSS_NONFRAG_IPV4_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_UDP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
-			      ETH_RSS_NONFRAG_IPV4_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
 
 	/* IPv6 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV6),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_NONFRAG_IPV6_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			      I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
-			      ETH_RSS_NONFRAG_IPV6_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_TCP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
-			      ETH_RSS_NONFRAG_IPV6_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
-			      ETH_RSS_NONFRAG_IPV6_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
 
 	/* ESP */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
 
 	/* GTPC */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
@@ -308,27 +308,27 @@ static const struct i40e_hash_match_pattern match_patterns[] = {
 				  I40E_HASH_IPV4_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
 				  I40E_HASH_IPV6_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 
 	/* L2TPV3 */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
 
 	/* AH */
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV4),
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV6),
 };
 
@@ -564,29 +564,29 @@ i40e_hash_get_inset(uint64_t rss_types)
 	/* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
 	 * it is the same case as none of them are added.
 	 */
-	mask = rss_types & (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY);
-	if (mask == ETH_RSS_L2_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
 		inset &= ~I40E_INSET_DMAC;
-	else if (mask == ETH_RSS_L2_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
 		inset &= ~I40E_INSET_SMAC;
 
-	mask = rss_types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
-	if (mask == ETH_RSS_L3_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
 		inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
-	else if (mask == ETH_RSS_L3_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
 		inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
 
-	mask = rss_types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
-	if (mask == ETH_RSS_L4_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
 		inset &= ~I40E_INSET_DST_PORT;
-	else if (mask == ETH_RSS_L4_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
 		inset &= ~I40E_INSET_SRC_PORT;
 
 	if (rss_types & I40E_HASH_L4_TYPES) {
 		uint64_t l3_mask = rss_types &
-				   (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+				   (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
 		uint64_t l4_mask = rss_types &
-				   (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+				   (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
 
 		if (l3_mask && !l4_mask)
 			inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
@@ -825,7 +825,7 @@ i40e_hash_config(struct i40e_pf *pf,
 
 	/* Update lookup table */
 	if (rss_info->queue_num > 0) {
-		uint8_t lut[ETH_RSS_RETA_SIZE_512];
+		uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 		uint32_t i, j = 0;
 
 		for (i = 0; i < hw->func_caps.rss_table_size; i++) {
@@ -932,7 +932,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
 			    "RSS key is ignored when queues specified");
 
 	pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		max_queue = i40e_pf_calc_configured_queues_num(pf);
 	else
 		max_queue = pf->dev_data->nb_rx_queues;
@@ -1070,22 +1070,22 @@ i40e_hash_validate_rss_types(uint64_t rss_types)
 	uint64_t type, mask;
 
 	/* Validate L2 */
-	type = ETH_RSS_ETH & rss_types;
-	mask = (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY) & rss_types;
+	type = RTE_ETH_RSS_ETH & rss_types;
+	mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L3 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-	       ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_IPV6 |
-	       ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
-	mask = (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+	       RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+	       RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+	mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L4 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_PORT) & rss_types;
-	mask = (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+	mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index e2d8b2b5f7f1..ccb3924a5f68 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1207,24 +1207,24 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 	event.event_data.link_event.link_status =
 		dev->data->dev_link.link_status;
 
-	/* need to convert the ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
+	/* need to convert the RTE_ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
 	switch (dev->data->dev_link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_100MB;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_10GB;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_20GB;
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_25GB;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
 		break;
 	default:
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 8329cbdd4e30..3bad4052ed1b 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1329,7 +1329,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	for (i = 0; i < tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		if (k) {
 			for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
 				for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
@@ -2005,7 +2005,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->queue_id = queue_idx;
 	rxq->reg_idx = reg_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2265,7 +2265,7 @@ i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
 	}
 	/* check simple tx conflict */
 	if (ad->tx_simple_allowed) {
-		if ((txq->offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
+		if ((txq->offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
 				txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
 			PMD_DRV_LOG(ERR, "No-simple tx is required.");
 			return -EINVAL;
@@ -2925,7 +2925,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
 	rxq->max_pkt_len =
 		RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
 			rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
-	if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
 			rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must "
@@ -3441,7 +3441,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		 (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		 (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		 txq->tx_rs_thresh >= RTE_PMD_I40E_TX_MAX_BURST);
 	ad->tx_vec_allowed = (ad->tx_simple_allowed &&
 			txq->tx_rs_thresh <= RTE_I40E_TX_MAX_FREE_BUF_SZ);
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e857..303a4db47dbd 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -120,7 +120,7 @@ struct i40e_rx_queue {
 	bool rx_deferred_start; /**< don't start this queue in dev start */
 	uint16_t rx_using_sse; /**<flag indicate the usage of vPMD for rx */
 	uint8_t dcb_tc;         /**< Traffic class of rx queue */
-	uint64_t offloads; /**< Rx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Rx offload flags of RTE_ETH_RX_OFFLOAD_* */
 };
 
 struct i40e_tx_entry {
@@ -165,7 +165,7 @@ struct i40e_tx_queue {
 	bool q_set; /**< indicate if tx queue has been configured */
 	bool tx_deferred_start; /**< don't start this queue in dev start */
 	uint8_t dcb_tc;         /**< Traffic class of tx queue */
-	uint64_t offloads; /**< Tx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 };
 
 /** Offload features */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index bd21d6422394..5f00d43950aa 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -899,7 +899,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index f52ed98d62d0..0192164c35fa 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -100,7 +100,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	  */
 	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < n; i++) {
 			free[i] = txep[i].mbuf;
 			txep[i].mbuf = NULL;
@@ -211,7 +211,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 	struct i40e_adapter *ad =
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc, i;
 	bool first_queue;
@@ -221,11 +221,11 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 		return -1;
 
 	 /* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	/* no QinQ support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 
 	/**
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b5538132..6d90b0f3511b 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -42,30 +42,30 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
 	dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
 		sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
 	dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -385,19 +385,19 @@ i40e_vf_representor_vlan_offload_set(struct rte_eth_dev *ethdev, int mask)
 		return -EINVAL;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			return i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_STRIP)
+		    RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			return i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_stripping(vsi, FALSE);
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index b3bd07811198..1d4383e89327 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -48,18 +48,18 @@
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
 #define IAVF_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |         \
-	ETH_RSS_NONFRAG_IPV4_TCP |  \
-	ETH_RSS_NONFRAG_IPV4_UDP |  \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |         \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 #define IAVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define IAVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 574cfe055e7c..94f6f4704b9c 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -265,53 +265,53 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	static const uint64_t map_hena_rss[] = {
 		/* IPv4 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP] =
-				ETH_RSS_NONFRAG_IPV4_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER] =
-				ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = ETH_RSS_FRAG_IPV4,
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
 
 		/* IPv6 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP] =
-				ETH_RSS_NONFRAG_IPV6_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER] =
-				ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = ETH_RSS_FRAG_IPV6,
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
 
 		/* L2 Payload */
-		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = ETH_RSS_L2_PAYLOAD
+		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
 	};
 
-	const uint64_t ipv4_rss = ETH_RSS_NONFRAG_IPV4_UDP |
-				  ETH_RSS_NONFRAG_IPV4_TCP |
-				  ETH_RSS_NONFRAG_IPV4_SCTP |
-				  ETH_RSS_NONFRAG_IPV4_OTHER |
-				  ETH_RSS_FRAG_IPV4;
+	const uint64_t ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV4;
 
-	const uint64_t ipv6_rss = ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_NONFRAG_IPV6_SCTP |
-				  ETH_RSS_NONFRAG_IPV6_OTHER |
-				  ETH_RSS_FRAG_IPV6;
+	const uint64_t ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV6;
 
 	struct iavf_info *vf =  IAVF_DEV_PRIVATE_TO_VF(adapter);
 	uint64_t caps = 0, hena = 0, valid_rss_hf = 0;
@@ -330,13 +330,13 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	/**
-	 * ETH_RSS_IPV4 and ETH_RSS_IPV6 can be considered as 2
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
 	 * generalizations of all other IPv4 and IPv6 RSS types.
 	 */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		rss_hf |= ipv4_rss;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		rss_hf |= ipv6_rss;
 
 	RTE_BUILD_BUG_ON(RTE_DIM(map_hena_rss) > sizeof(uint64_t) * CHAR_BIT);
@@ -362,10 +362,10 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	if (valid_rss_hf & ipv4_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV4;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
 
 	if (valid_rss_hf & ipv6_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV6;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
 
 	if (rss_hf & ~valid_rss_hf)
 		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
@@ -466,7 +466,7 @@ iavf_dev_vlan_insert_set(struct rte_eth_dev *dev)
 		return 0;
 
 	enable = !!(dev->data->dev_conf.txmode.offloads &
-		    DEV_TX_OFFLOAD_VLAN_INSERT);
+		    RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	iavf_config_vlan_insert_v2(adapter, enable);
 
 	return 0;
@@ -478,10 +478,10 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
 	int err;
 
 	err = iavf_dev_vlan_offload_set(dev,
-					ETH_VLAN_STRIP_MASK |
-					ETH_QINQ_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK |
-					ETH_VLAN_EXTEND_MASK);
+					RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_QINQ_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK |
+					RTE_ETH_VLAN_EXTEND_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to update vlan offload");
 		return err;
@@ -511,8 +511,8 @@ iavf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_vec_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Large VF setting */
 	if (num_queue_pairs > IAVF_MAX_NUM_QUEUES_DFLT) {
@@ -585,7 +585,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
 	/* Check if the jumbo frame and maximum packet length are set
 	 * correctly.
 	 */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
 		    max_pkt_len > IAVF_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -608,7 +608,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    rxq->max_pkt_len > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -943,35 +943,35 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
@@ -1031,42 +1031,42 @@ iavf_dev_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (vf->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = vf->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -1214,14 +1214,14 @@ iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
 	bool enable;
 	int err;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 
 		iavf_iterate_vlan_filters_v2(dev, enable);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		err = iavf_config_vlan_strip_v2(adapter, enable);
 		/* If not support, the stripping is already disabled by PF */
@@ -1250,9 +1250,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			err = iavf_enable_vlan_strip(adapter);
 		else
 			err = iavf_disable_vlan_strip(adapter);
@@ -1457,10 +1457,10 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > IAVF_ETH_MAX_LEN)
 		dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_JUMBO_FRAME;
+				RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+				~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
@@ -1564,7 +1564,7 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	ret = iavf_query_stats(adapter, &pstats);
 	if (ret == 0) {
 		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
-					 DEV_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
 					 RTE_ETHER_CRC_LEN;
 		iavf_update_stats(vsi, pstats);
 		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 2b03dad8589c..1329a389f742 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -341,83 +341,83 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
 /* rss type super set */
 
 /* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4	(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4)
+#define IAVF_RSS_TYPE_OUTER_IPV4	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4)
 #define IAVF_RSS_TYPE_OUTER_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 #define IAVF_RSS_TYPE_OUTER_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 #define IAVF_RSS_TYPE_OUTER_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 /* IPv6 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV6	(ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 #define IAVF_RSS_TYPE_OUTER_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 #define IAVF_RSS_TYPE_OUTER_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 /* VLAN IPV4 */
 #define IAVF_RSS_TYPE_VLAN_IPV4		(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define IAVF_RSS_TYPE_VLAN_IPV6		(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* IPv4 inner */
-#define IAVF_RSS_TYPE_INNER_IPV4	ETH_RSS_IPV4
-#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV4	RTE_ETH_RSS_IPV4
+#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 /* IPv6 inner */
-#define IAVF_RSS_TYPE_INNER_IPV6	ETH_RSS_IPV6
-#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV6	RTE_ETH_RSS_IPV6
+#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 /* GTPU IPv4 */
 #define IAVF_RSS_TYPE_GTPU_IPV4		(IAVF_RSS_TYPE_INNER_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_UDP	(IAVF_RSS_TYPE_INNER_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_TCP	(IAVF_RSS_TYPE_INNER_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define IAVF_RSS_TYPE_GTPU_IPV6		(IAVF_RSS_TYPE_INNER_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_UDP	(IAVF_RSS_TYPE_INNER_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_TCP	(IAVF_RSS_TYPE_INNER_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* ESP, AH, L2TPV3 and PFCP */
-#define IAVF_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /**
  * Supported pattern for hash.
@@ -435,7 +435,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv4_udp,		IAVF_RSS_TYPE_VLAN_IPV4_UDP,	&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_tcp,		IAVF_RSS_TYPE_VLAN_IPV4_TCP,	&outer_ipv4_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_sctp,		IAVF_RSS_TYPE_VLAN_IPV4_SCTP,	&outer_ipv4_sctp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpu,			ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
+	{iavf_pattern_eth_ipv4_gtpu,			RTE_ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4,		IAVF_RSS_TYPE_GTPU_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_udp,		IAVF_RSS_TYPE_GTPU_IPV4_UDP,	&inner_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_tcp,		IAVF_RSS_TYPE_GTPU_IPV4_TCP,	&inner_ipv4_tcp_tmplt},
@@ -477,9 +477,9 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv4_ah,			IAVF_RSS_TYPE_IPV4_AH,		&ipv4_ah_tmplt},
 	{iavf_pattern_eth_ipv4_l2tpv3,			IAVF_RSS_TYPE_IPV4_L2TPV3,	&ipv4_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv4_pfcp,			IAVF_RSS_TYPE_IPV4_PFCP,	&ipv4_pfcp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpc,			ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
-	{iavf_pattern_eth_ecpri,			ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
-	{iavf_pattern_eth_ipv4_ecpri,			ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_gtpc,			RTE_ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ecpri,			RTE_ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_ecpri,			RTE_ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4_tcp,	IAVF_RSS_TYPE_INNER_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -497,7 +497,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv6_udp,		IAVF_RSS_TYPE_VLAN_IPV6_UDP,	&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_tcp,		IAVF_RSS_TYPE_VLAN_IPV6_TCP,	&outer_ipv6_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_sctp,		IAVF_RSS_TYPE_VLAN_IPV6_SCTP,	&outer_ipv6_sctp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpu,			ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
+	{iavf_pattern_eth_ipv6_gtpu,			RTE_ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6,		IAVF_RSS_TYPE_GTPU_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_udp,		IAVF_RSS_TYPE_GTPU_IPV6_UDP,	&inner_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_tcp,		IAVF_RSS_TYPE_GTPU_IPV6_TCP,	&inner_ipv6_tcp_tmplt},
@@ -539,7 +539,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv6_ah,			IAVF_RSS_TYPE_IPV6_AH,		&ipv6_ah_tmplt},
 	{iavf_pattern_eth_ipv6_l2tpv3,			IAVF_RSS_TYPE_IPV6_L2TPV3,	&ipv6_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv6_pfcp,			IAVF_RSS_TYPE_IPV6_PFCP,	&ipv6_pfcp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpc,			ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ipv6_gtpc,			RTE_ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6_tcp,	IAVF_RSS_TYPE_INNER_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -573,57 +573,57 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
 	struct virtchnl_rss_cfg rss_cfg;
 
 #define IAVF_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 	rss_cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		rss_cfg.proto_hdrs = inner_ipv4_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv4_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv4_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv4_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		rss_cfg.proto_hdrs = inner_ipv6_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv6_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv6_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv6_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_FRAG_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_FRAG_IPV4) {
 		struct virtchnl_proto_hdrs hdr = {
 			.tunnel_level = TUNNEL_LEVEL_OUTER,
 			.count = 3,
@@ -641,7 +641,7 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_FRAG_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_FRAG_IPV6) {
 		struct virtchnl_proto_hdrs hdr = {
 			.tunnel_level = TUNNEL_LEVEL_OUTER,
 			.count = 3,
@@ -804,28 +804,28 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 		hdr = &proto_hdrs->proto_hdr[i];
 		switch (hdr->type) {
 		case VIRTCHNL_PROTO_HDR_ETH:
-			if (!(rss_type & ETH_RSS_ETH))
+			if (!(rss_type & RTE_ETH_RSS_ETH))
 				hdr->field_selector = 0;
-			else if (rss_type & ETH_RSS_L2_SRC_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_DST);
-			else if (rss_type & ETH_RSS_L2_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_SRC);
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4) {
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 					iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
-				} else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				}
@@ -835,11 +835,11 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4)
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4)
 					REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
 			} else {
 				hdr->field_selector = 0;
@@ -847,17 +847,17 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6:
 			if (rss_type &
-			    (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV6_TCP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			    (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				}
@@ -874,7 +874,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			}
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
 			else
 				hdr->field_selector = 0;
@@ -882,15 +882,15 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_UDP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
@@ -898,15 +898,15 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_TCP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
@@ -914,46 +914,46 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_SCTP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_SCTP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 			break;
 		case VIRTCHNL_PROTO_HDR_S_VLAN:
-			if (!(rss_type & ETH_RSS_S_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_S_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_C_VLAN:
-			if (!(rss_type & ETH_RSS_C_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_C_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_L2TPV3:
-			if (!(rss_type & ETH_RSS_L2TPV3))
+			if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ESP:
-			if (!(rss_type & ETH_RSS_ESP))
+			if (!(rss_type & RTE_ETH_RSS_ESP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_AH:
-			if (!(rss_type & ETH_RSS_AH))
+			if (!(rss_type & RTE_ETH_RSS_AH))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_PFCP:
-			if (!(rss_type & ETH_RSS_PFCP))
+			if (!(rss_type & RTE_ETH_RSS_PFCP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ECPRI:
-			if (!(rss_type & ETH_RSS_ECPRI))
+			if (!(rss_type & RTE_ETH_RSS_ECPRI))
 				hdr->field_selector = 0;
 			break;
 		default:
@@ -970,7 +970,7 @@ iavf_refine_proto_hdrs_gtpu(struct virtchnl_proto_hdrs *proto_hdrs,
 	struct virtchnl_proto_hdr *hdr;
 	int i;
 
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	for (i = 0; i < proto_hdrs->count; i++) {
@@ -1067,10 +1067,10 @@ static void iavf_refine_proto_hdrs(struct virtchnl_proto_hdrs *proto_hdrs,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE32 | RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE48 | RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -1081,27 +1081,27 @@ struct rss_attr_type {
 	uint64_t type;
 };
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE64)
 
 #define INVALID_RSS_ATTR	(RTE_ETH_RSS_L3_PRE32	| \
@@ -1111,9 +1111,9 @@ struct rss_attr_type {
 				 RTE_ETH_RSS_L3_PRE96)
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* current ipv6 prefix only supports prefix 64 bits*/
 	{RTE_ETH_RSS_L3_PRE64,				VALID_RSS_IPV6},
 	{INVALID_RSS_ATTR,				0}
@@ -1130,15 +1130,15 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
 
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index e33fe4576b6e..4ff856fc82aa 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -609,7 +609,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->vsi = vsi;
 	rxq->offloads = offloads;
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d633..096be81e8a69 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -24,22 +24,22 @@
 #define IAVF_VPMD_TX_MAX_FREE_BUF 64
 
 #define IAVF_TX_NO_VECTOR_FLAGS (				 \
-		DEV_TX_OFFLOAD_MULTI_SEGS |		 \
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define IAVF_TX_VECTOR_OFFLOAD (				 \
-		DEV_TX_OFFLOAD_VLAN_INSERT |		 \
-		DEV_TX_OFFLOAD_QINQ_INSERT |		 \
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		 \
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_TX_OFFLOAD_UDP_CKSUM |		 \
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define IAVF_RX_VECTOR_OFFLOAD (				 \
-		DEV_RX_OFFLOAD_CHECKSUM |		 \
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_RX_OFFLOAD_VLAN |		 \
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_VLAN |		 \
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IAVF_VECTOR_PATH 0
 #define IAVF_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 475070e036ef..8f9a397e4143 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -904,7 +904,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_RSS_HASH ||
+				RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 				rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh7 =
@@ -957,7 +957,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 					raw_desc_bh1, 1);
 
 			if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/**
 				 * to shift the 32b RSS hash value to the
 				 * highest 32b of each 128b before mask
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 571161c0cdec..2329928c62cb 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1138,7 +1138,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
-			    DEV_RX_OFFLOAD_RSS_HASH ||
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 			    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
@@ -1191,7 +1191,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 						 raw_desc_bh1, 1);
 
 				if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
-						DEV_RX_OFFLOAD_RSS_HASH) {
+						RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 					/**
 					 * to shift the 32b RSS hash value to the
 					 * highest 32b of each 128b before mask
@@ -1719,7 +1719,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
 								rte_lcore_id());
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ee1e9055259b..58f928bdd7ca 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -818,7 +818,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_RSS_HASH) {
+				RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 4c2e0c7216fd..ec53478083b4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -807,7 +807,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
 		PMD_DRV_LOG(DEBUG, "RSS is not supported");
 		return -ENOTSUP;
 	}
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
 		/* set all lut items to default queue */
 		memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da8759..6226aa5a80c2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -66,7 +66,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 	/* Check if the jumbo frame and maximum packet length are set
 	 * correctly.
 	 */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (max_pkt_len <= ICE_ETH_MAX_LEN ||
 		    max_pkt_len > ICE_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -89,7 +89,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -559,7 +559,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -620,7 +620,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 	}
 
 	ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 
 	return 0;
@@ -635,8 +635,8 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	return 0;
 }
@@ -658,28 +658,28 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -896,42 +896,42 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (hw->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = hw->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = hw->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -950,11 +950,11 @@ ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(parent_hw, TNL_VXLAN,
 					udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_create_tunnel(parent_hw, TNL_ECPRI,
 					udp_tunnel->udp_port);
 		break;
@@ -981,8 +981,8 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_destroy_tunnel(parent_hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e90a..0dac1b92bfdb 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -37,7 +37,7 @@ ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -45,7 +45,7 @@ ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -135,29 +135,29 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -239,9 +239,9 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		bool enable = !!(dev_conf->rxmode.offloads &
-				 DEV_RX_OFFLOAD_VLAN_STRIP);
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (enable && repr->outer_vlan_info.port_vlan_ena) {
 			PMD_DRV_LOG(ERR,
@@ -338,7 +338,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 	if (!ice_dcf_vlan_offload_ena(repr))
 		return -ENOTSUP;
 
-	if (vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer VLAN in QinQ\n");
 		return -EINVAL;
@@ -368,7 +368,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 
 	if (repr->outer_vlan_info.stripping_ena) {
 		err = ice_dcf_vf_repr_vlan_offload_set(dev,
-						       ETH_VLAN_STRIP_MASK);
+						       RTE_ETH_VLAN_STRIP_MASK);
 		if (err) {
 			PMD_DRV_LOG(ERR,
 				    "Failed to reset VLAN stripping : %d\n",
@@ -441,7 +441,7 @@ ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev)
 	int err;
 
 	err = ice_dcf_vf_repr_vlan_offload_set(vf_rep_eth_dev,
-					       ETH_VLAN_STRIP_MASK);
+					       RTE_ETH_VLAN_STRIP_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to set VLAN offload");
 		return err;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954f1..d79cc549da19 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1449,9 +1449,9 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
 	TAILQ_INIT(&vsi->mac_list);
 	TAILQ_INIT(&vsi->vlan_list);
 
-	/* Be sync with ETH_RSS_RETA_SIZE_x maximum value definition */
+	/* Be sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
 	pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
-			ETH_RSS_RETA_SIZE_512 ? ETH_RSS_RETA_SIZE_512 :
+			RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
 			hw->func_caps.common_cap.rss_table_size;
 	pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;
 
@@ -2809,16 +2809,16 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	int ret;
 
 #define ICE_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_FRAG_IPV6)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV6)
 
 	ret = ice_rem_vsi_rss_cfg(hw, vsi->idx);
 	if (ret)
@@ -2828,7 +2828,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	cfg.symm = 0;
 	cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
 	/* Configure RSS for IPv4 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2838,7 +2838,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for IPv6 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2848,7 +2848,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -2859,7 +2859,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -2870,7 +2870,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -2881,7 +2881,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -2892,7 +2892,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV4;
@@ -2903,7 +2903,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV6;
@@ -2913,7 +2913,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
@@ -2923,7 +2923,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
@@ -2933,7 +2933,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -2943,7 +2943,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -2953,7 +2953,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -2963,7 +2963,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -2973,7 +2973,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_FRAG_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_FRAG_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_FRAG;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4 | BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2982,7 +2982,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_FRAG_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_FRAG_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_FRAG;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6 | BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3124,8 +3124,8 @@ ice_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_rx_queues) {
 		ret = ice_init_rss(pf);
@@ -3344,8 +3344,8 @@ ice_dev_start(struct rte_eth_dev *dev)
 	ice_set_rx_function(dev);
 	ice_set_tx_function(dev);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = ice_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
@@ -3449,40 +3449,40 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->flow_type_rss_offloads = 0;
 
 	if (!is_safe_mode) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM |
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_QINQ_STRIP |
-			DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_VLAN_EXTEND |
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_QINQ_INSERT |
-			DEV_TX_OFFLOAD_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM |
-			DEV_TX_OFFLOAD_SCTP_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 		dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
 	}
 
 	dev_info->rx_queue_offload_capa = 0;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->reta_size = pf->hash_lut_size;
 	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
@@ -3521,24 +3521,24 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_align = ICE_ALIGN_RING_DESC,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			       ETH_LINK_SPEED_100M |
-			       ETH_LINK_SPEED_1G |
-			       ETH_LINK_SPEED_2_5G |
-			       ETH_LINK_SPEED_5G |
-			       ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_20G |
-			       ETH_LINK_SPEED_25G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			       RTE_ETH_LINK_SPEED_100M |
+			       RTE_ETH_LINK_SPEED_1G |
+			       RTE_ETH_LINK_SPEED_2_5G |
+			       RTE_ETH_LINK_SPEED_5G |
+			       RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_20G |
+			       RTE_ETH_LINK_SPEED_25G;
 
 	phy_type_low = hw->port_info->phy.phy_type_low;
 	phy_type_high = hw->port_info->phy.phy_type_high;
 
 	if (ICE_PHY_TYPE_SUPPORT_50G(phy_type_low))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 
 	if (ICE_PHY_TYPE_SUPPORT_100G_LOW(phy_type_low) ||
 			ICE_PHY_TYPE_SUPPORT_100G_HIGH(phy_type_high))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
 	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
@@ -3603,8 +3603,8 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		status = ice_aq_get_link_info(hw->port_info, enable_lse,
 					      &link_status, NULL);
 		if (status != ICE_SUCCESS) {
-			link.link_speed = ETH_SPEED_NUM_100M;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_100M;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			goto out;
 		}
@@ -3620,55 +3620,55 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		goto out;
 
 	/* Full-duplex operation at all supported speeds */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case ICE_AQ_LINK_SPEED_10MB:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case ICE_AQ_LINK_SPEED_100MB:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case ICE_AQ_LINK_SPEED_1000MB:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case ICE_AQ_LINK_SPEED_2500MB:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_5GB:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_10GB:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case ICE_AQ_LINK_SPEED_20GB:
-		link.link_speed = ETH_SPEED_NUM_20G;
+		link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case ICE_AQ_LINK_SPEED_25GB:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case ICE_AQ_LINK_SPEED_40GB:
-		link.link_speed = ETH_SPEED_NUM_40G;
+		link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case ICE_AQ_LINK_SPEED_50GB:
-		link.link_speed = ETH_SPEED_NUM_50G;
+		link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case ICE_AQ_LINK_SPEED_100GB:
-		link.link_speed = ETH_SPEED_NUM_100G;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case ICE_AQ_LINK_SPEED_UNKNOWN:
 		PMD_DRV_LOG(ERR, "Unknown link speed");
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "None link speed");
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 
 out:
 	ice_atomic_write_link_status(dev, &link);
@@ -3767,10 +3767,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > ICE_ETH_MAX_LEN)
 		dev_data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev_data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
@@ -4161,15 +4161,15 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ice_vsi_config_vlan_filter(vsi, true);
 		else
 			ice_vsi_config_vlan_filter(vsi, false);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ice_vsi_config_vlan_stripping(vsi, true);
 		else
 			ice_vsi_config_vlan_stripping(vsi, false);
@@ -5244,7 +5244,7 @@ ice_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(hw, TNL_VXLAN, udp_tunnel->udp_port);
 		break;
 	default:
@@ -5268,7 +5268,7 @@ ice_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_destroy_tunnel(hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index b4bf651c1c7f..1c4bc4e30349 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -115,19 +115,19 @@
 		       ICE_FLAG_VF_MAC_BY_PF)
 
 #define ICE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /**
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 54d14dfcddfb..beb863f70568 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -39,27 +39,27 @@
 #define ICE_IPV4_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT)
 #define ICE_IPV6_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PROT)
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE32	| \
 				 RTE_ETH_RSS_L3_PRE48	| \
 				 RTE_ETH_RSS_L3_PRE64)
@@ -373,80 +373,80 @@ struct ice_rss_hash_cfg eth_tmplt = {
 };
 
 /* IPv4 */
-#define ICE_RSS_TYPE_ETH_IPV4		(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4)
+#define ICE_RSS_TYPE_ETH_IPV4		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4)
 #define ICE_RSS_TYPE_ETH_IPV4_UDP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 #define ICE_RSS_TYPE_ETH_IPV4_TCP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 #define ICE_RSS_TYPE_ETH_IPV4_SCTP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
-#define ICE_RSS_TYPE_IPV4		ETH_RSS_IPV4
-#define ICE_RSS_TYPE_IPV4_UDP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define ICE_RSS_TYPE_IPV4_TCP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define ICE_RSS_TYPE_IPV4_SCTP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+#define ICE_RSS_TYPE_IPV4		RTE_ETH_RSS_IPV4
+#define ICE_RSS_TYPE_IPV4_UDP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define ICE_RSS_TYPE_IPV4_TCP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define ICE_RSS_TYPE_IPV4_SCTP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 /* IPv6 */
-#define ICE_RSS_TYPE_ETH_IPV6		(ETH_RSS_ETH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(ETH_RSS_ETH | ETH_RSS_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_ETH_IPV6_UDP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 #define ICE_RSS_TYPE_ETH_IPV6_TCP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 #define ICE_RSS_TYPE_ETH_IPV6_SCTP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
-#define ICE_RSS_TYPE_IPV6		ETH_RSS_IPV6
-#define ICE_RSS_TYPE_IPV6_UDP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define ICE_RSS_TYPE_IPV6_TCP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define ICE_RSS_TYPE_IPV6_SCTP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ICE_RSS_TYPE_IPV6		RTE_ETH_RSS_IPV6
+#define ICE_RSS_TYPE_IPV6_UDP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define ICE_RSS_TYPE_IPV6_TCP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define ICE_RSS_TYPE_IPV6_SCTP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* VLAN IPV4 */
 #define ICE_RSS_TYPE_VLAN_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV4)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV4)
 #define ICE_RSS_TYPE_VLAN_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_SCTP	(ICE_RSS_TYPE_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define ICE_RSS_TYPE_VLAN_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_FRAG	(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_VLAN_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_SCTP	(ICE_RSS_TYPE_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 
 /* GTPU IPv4 */
 #define ICE_RSS_TYPE_GTPU_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define ICE_RSS_TYPE_GTPU_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 
 /* PPPOE */
-#define ICE_RSS_TYPE_PPPOE		(ETH_RSS_ETH | ETH_RSS_PPPOE)
+#define ICE_RSS_TYPE_PPPOE		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_PPPOE)
 
 /* PPPOE IPv4 */
 #define ICE_RSS_TYPE_PPPOE_IPV4		(ICE_RSS_TYPE_IPV4 | \
@@ -465,17 +465,17 @@ struct ice_rss_hash_cfg eth_tmplt = {
 					 ICE_RSS_TYPE_PPPOE)
 
 /* ESP, AH, L2TPV3 and PFCP */
-#define ICE_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /* MAC */
-#define ICE_RSS_TYPE_ETH		ETH_RSS_ETH
+#define ICE_RSS_TYPE_ETH		RTE_ETH_RSS_ETH
 
 /**
  * Supported pattern for hash.
@@ -640,51 +640,51 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH) {
-		if (!(rss_type & ETH_RSS_ETH))
+		if (!(rss_type & RTE_ETH_RSS_ETH))
 			*hash_flds &= ~ICE_FLOW_HASH_ETH;
-		if (rss_type & ETH_RSS_L2_SRC_ONLY)
+		if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_DA));
-		else if (rss_type & ETH_RSS_L2_DST_ONLY)
+		else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_SA));
 		*addl_hdrs &= ~ICE_FLOW_SEG_HDR_ETH;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH_NON_IP) {
-		if (rss_type & ETH_RSS_ETH)
+		if (rss_type & RTE_ETH_RSS_ETH)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_TYPE);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_VLAN) {
-		if (rss_type & ETH_RSS_C_VLAN)
+		if (rss_type & RTE_ETH_RSS_C_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_C_VLAN);
-		else if (rss_type & ETH_RSS_S_VLAN)
+		else if (rss_type & RTE_ETH_RSS_S_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_S_VLAN);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PPPOE) {
-		if (!(rss_type & ETH_RSS_PPPOE))
+		if (!(rss_type & RTE_ETH_RSS_PPPOE))
 			*hash_flds &= ~ICE_FLOW_HASH_PPPOE_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV4) {
 		if (rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-		    ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV4) {
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 				*addl_hdrs |= ICE_FLOW_SEG_HDR_IPV_FRAG;
 				*addl_hdrs &= ~(ICE_FLOW_SEG_HDR_IPV_OTHER);
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
 			}
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV4;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV4;
@@ -693,30 +693,30 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV6) {
 		if (rss_type &
-		   (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+		   (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		}
 
 		if (rss_type & RTE_ETH_RSS_L3_PRE32) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA));
 			} else {
@@ -725,10 +725,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE48) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_DA));
 			} else {
@@ -737,10 +737,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE64) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA));
 			} else {
@@ -752,15 +752,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_UDP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV6_UDP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
@@ -769,15 +769,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_TCP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV6_TCP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
@@ -786,15 +786,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_SCTP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_SCTP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
@@ -802,22 +802,22 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_L2TPV3) {
-		if (!(rss_type & ETH_RSS_L2TPV3))
+		if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 			*hash_flds &= ~ICE_FLOW_HASH_L2TPV3_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ESP) {
-		if (!(rss_type & ETH_RSS_ESP))
+		if (!(rss_type & RTE_ETH_RSS_ESP))
 			*hash_flds &= ~ICE_FLOW_HASH_ESP_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_AH) {
-		if (!(rss_type & ETH_RSS_AH))
+		if (!(rss_type & RTE_ETH_RSS_AH))
 			*hash_flds &= ~ICE_FLOW_HASH_AH_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PFCP_SESSION) {
-		if (!(rss_type & ETH_RSS_PFCP))
+		if (!(rss_type & RTE_ETH_RSS_PFCP))
 			*hash_flds &= ~ICE_FLOW_HASH_PFCP_SEID;
 	}
 }
@@ -851,7 +851,7 @@ ice_refine_hash_cfg_gtpu(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	/* update hash field for gtpu eh/gtpu dwn/gtpu up. */
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN)
@@ -873,10 +873,10 @@ static void ice_refine_hash_cfg(struct ice_rss_hash_cfg *hash_cfg,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -888,9 +888,9 @@ struct rss_attr_type {
 };
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* current ipv6 prefix only supports prefix 64 bits*/
 	{RTE_ETH_RSS_L3_PRE32,				VALID_RSS_IPV6},
 	{RTE_ETH_RSS_L3_PRE48,				VALID_RSS_IPV6},
@@ -909,16 +909,16 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_FRAG_IPV4 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047ee..63c07e001f07 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -280,7 +280,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 				   ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
 				   dev_data->dev_conf.rxmode.max_rx_pkt_len);
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
 		    rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must "
@@ -1103,7 +1103,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
 
 	rxq->reg_idx = vsi->base_queue + queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2780,7 +2780,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
 	for (i = 0; i < txq->tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
 			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
 			txep->mbuf = NULL;
@@ -3254,7 +3254,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		(txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		(txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		txq->tx_rs_thresh >= ICE_TX_MAX_BURST);
 
 	if (ad->tx_simple_allowed)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 9725ac018043..8c870354619e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -473,7 +473,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bba9887d296..6d2038975830 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -584,7 +584,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
@@ -994,7 +994,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 2d8ef7dc8a93..a5b573c22da2 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -248,23 +248,23 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
 }
 
 #define ICE_TX_NO_VECTOR_FLAGS (			\
-		DEV_TX_OFFLOAD_MULTI_SEGS |		\
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		\
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define ICE_TX_VECTOR_OFFLOAD (				\
-		DEV_TX_OFFLOAD_VLAN_INSERT |		\
-		DEV_TX_OFFLOAD_QINQ_INSERT |		\
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		\
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_TX_OFFLOAD_UDP_CKSUM |		\
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define ICE_RX_VECTOR_OFFLOAD (				\
-		DEV_RX_OFFLOAD_CHECKSUM |		\
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_RX_OFFLOAD_VLAN |			\
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		\
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_RX_OFFLOAD_VLAN |			\
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define ICE_VECTOR_PATH		0
 #define ICE_VECTOR_OFFLOAD_PATH	1
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b417c..117494131f32 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -479,7 +479,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_RSS_HASH) {
+				RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a0954836b..d8f5a786efac 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -314,8 +314,8 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		rx_mq_mode != ETH_MQ_RX_RSS) {
+	if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 		/* RSS together with VMDq not supported*/
 		PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				rx_mq_mode);
@@ -325,7 +325,7 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 	/* To no break software that set invalid mode, only display
 	 * warning if invalid mode is used.
 	 */
-	if (tx_mq_mode != ETH_MQ_TX_NONE)
+	if (tx_mq_mode != RTE_ETH_MQ_TX_NONE)
 		PMD_INIT_LOG(WARNING,
 			"TX mode %d is not supported. Due to meaningless in this driver, just ignore",
 			tx_mq_mode);
@@ -341,8 +341,8 @@ eth_igc_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	ret  = igc_check_mq_mode(dev);
 	if (ret != 0)
@@ -480,12 +480,12 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 		if (speed == SPEED_2500) {
 			uint32_t tipg = IGC_READ_REG(hw, IGC_TIPG);
@@ -497,9 +497,9 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		}
 	} else {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -532,7 +532,7 @@ eth_igc_interrupt_action(struct rte_eth_dev *dev)
 				" Port %d: Link Up - speed %u Mbps - %s",
 				dev->data->port_id,
 				(unsigned int)link.link_speed,
-				link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				"full-duplex" : "half-duplex");
 		else
 			PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -979,18 +979,18 @@ eth_igc_start(struct rte_eth_dev *dev)
 
 	/* VLAN Offload Settings */
 	eth_igc_vlan_offload_set(dev,
-		ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+		RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = IGC_ALL_SPEED_DUPLEX_2500;
 		hw->mac.autoneg = 1;
 	} else {
 		int num_speeds = 0;
 
-		if (*speeds & ETH_LINK_SPEED_FIXED) {
+		if (*speeds & RTE_ETH_LINK_SPEED_FIXED) {
 			PMD_DRV_LOG(ERR,
 				    "Force speed mode currently not supported");
 			igc_dev_clear_queues(dev);
@@ -1000,33 +1000,33 @@ eth_igc_start(struct rte_eth_dev *dev)
 		hw->phy.autoneg_advertised = 0;
 		hw->mac.autoneg = 1;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_2_5G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_2_5G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_2500_FULL;
 			num_speeds++;
 		}
@@ -1490,14 +1490,14 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
 	dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
 	dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
-	dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	dev_info->max_rx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_tx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_vmdq_pools = 0;
 
 	dev_info->hash_key_size = IGC_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGC_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1523,9 +1523,9 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - IGC_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -1603,11 +1603,11 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* switch to jumbo mode if needed */
 	if (mtu > RTE_ETHER_MTU) {
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl |= IGC_RCTL_LPE;
 	} else {
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl &= ~IGC_RCTL_LPE;
 	}
 	IGC_WRITE_REG(hw, IGC_RCTL, rctl);
@@ -2165,13 +2165,13 @@ eth_igc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2203,16 +2203,16 @@ eth_igc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->fc.requested_mode = igc_fc_none;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->fc.requested_mode = igc_fc_rx_pause;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->fc.requested_mode = igc_fc_tx_pause;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->fc.requested_mode = igc_fc_full;
 		break;
 	default:
@@ -2258,17 +2258,17 @@ eth_igc_rss_reta_update(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* set redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta, reg;
 		uint16_t idx, shift;
 		uint8_t j, mask;
@@ -2314,17 +2314,17 @@ eth_igc_rss_reta_query(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* read redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta;
 		uint16_t idx, shift;
 		uint8_t j, mask;
@@ -2393,23 +2393,23 @@ eth_igc_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_hf = 0;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 
 	rss_conf->rss_hf |= rss_hf;
 	return 0;
@@ -2495,7 +2495,7 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
 		return 0;
 
 	if ((dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0)
 		goto write_ext_vlan;
 
 	/* Update maximum packet length */
@@ -2528,7 +2528,7 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
 		return 0;
 
 	if ((dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0)
 		goto write_ext_vlan;
 
 	/* Update maximum packet length */
@@ -2554,22 +2554,22 @@ eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igc_vlan_hw_strip_enable(dev);
 		else
 			igc_vlan_hw_strip_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igc_vlan_hw_filter_enable(dev);
 		else
 			igc_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			return igc_vlan_hw_extend_enable(dev);
 		else
 			return igc_vlan_hw_extend_disable(dev);
@@ -2587,7 +2587,7 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
 	uint32_t reg_val;
 
 	/* only outer TPID of double VLAN can be configured*/
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg_val = IGC_READ_REG(hw, IGC_VET);
 		reg_val = (reg_val & (~IGC_VET_EXT)) |
 			((uint32_t)tpid << IGC_VET_EXT_SHIFT);
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 7b6c209df3b6..066792b8a2d8 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -59,38 +59,38 @@ extern "C" {
 #define IGC_TX_MAX_MTU_SEG	UINT8_MAX
 
 #define IGC_RX_OFFLOAD_ALL	(    \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_VLAN_FILTER | \
-	DEV_RX_OFFLOAD_VLAN_EXTEND | \
-	DEV_RX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_RX_OFFLOAD_UDP_CKSUM   | \
-	DEV_RX_OFFLOAD_TCP_CKSUM   | \
-	DEV_RX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_RX_OFFLOAD_JUMBO_FRAME | \
-	DEV_RX_OFFLOAD_KEEP_CRC    | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+	RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+	RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+	RTE_ETH_RX_OFFLOAD_KEEP_CRC    | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IGC_TX_OFFLOAD_ALL	(    \
-	DEV_TX_OFFLOAD_VLAN_INSERT | \
-	DEV_TX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_TX_OFFLOAD_UDP_CKSUM   | \
-	DEV_TX_OFFLOAD_TCP_CKSUM   | \
-	DEV_TX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_TX_OFFLOAD_TCP_TSO     | \
-	DEV_TX_OFFLOAD_UDP_TSO	   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO     | \
+	RTE_ETH_TX_OFFLOAD_UDP_TSO	   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define IGC_RSS_OFFLOAD_ALL	(    \
-	ETH_RSS_IPV4               | \
-	ETH_RSS_NONFRAG_IPV4_TCP   | \
-	ETH_RSS_NONFRAG_IPV4_UDP   | \
-	ETH_RSS_IPV6               | \
-	ETH_RSS_NONFRAG_IPV6_TCP   | \
-	ETH_RSS_NONFRAG_IPV6_UDP   | \
-	ETH_RSS_IPV6_EX            | \
-	ETH_RSS_IPV6_TCP_EX        | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4               | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP   | \
+	RTE_ETH_RSS_IPV6               | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP   | \
+	RTE_ETH_RSS_IPV6_EX            | \
+	RTE_ETH_RSS_IPV6_TCP_EX        | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IGC_MAX_ETQF_FILTERS		3	/* etqf(3) is used for 1588 */
 #define IGC_ETQF_FILTER_1588		3
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd220..82e7e084b41d 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -127,7 +127,7 @@ struct igc_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;	/**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /** Offload features */
@@ -209,7 +209,7 @@ struct igc_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 };
 
 static inline uint64_t
@@ -866,23 +866,23 @@ igc_hw_rss_hash_set(struct igc_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IGC_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	IGC_WRITE_REG(hw, IGC_MRQC, mrqc);
 }
@@ -1056,10 +1056,10 @@ igc_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		igc_rss_configure(dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/*
 		 * configure RSS register for following,
 		 * then disable the RSS logic
@@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 	IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
 
 	/* Configure support of jumbo frames, if any. */
-	if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		rctl |= IGC_RCTL_LPE;
 
 		/*
@@ -1130,7 +1130,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure
 		 */
-		rxq->crc_len = (offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+		rxq->crc_len = (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				RTE_ETHER_CRC_LEN : 0;
 
 		bus_addr = rxq->rx_ring_phys_addr;
@@ -1196,7 +1196,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	if (dev->data->scattered_rx) {
@@ -1240,20 +1240,20 @@ igc_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= IGC_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= IGC_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_IPOFL;
 
 	if (offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rxcsum |= IGC_RXCSUM_TUOFL;
-		offloads |= DEV_RX_OFFLOAD_SCTP_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_SCTP_CKSUM;
 	} else {
 		rxcsum &= ~IGC_RXCSUM_TUOFL;
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCTP_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
 		rxcsum |= IGC_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_CRCOFL;
@@ -1261,7 +1261,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 	IGC_WRITE_REG(hw, IGC_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~IGC_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= IGC_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1298,12 +1298,12 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
 
 		dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			dvmolr |= IGC_DVMOLR_STRVLAN;
 		else
 			dvmolr &= ~IGC_DVMOLR_STRVLAN;
 
-		if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			dvmolr &= ~IGC_DVMOLR_STRCRC;
 		else
 			dvmolr |= IGC_DVMOLR_STRCRC;
@@ -2272,10 +2272,10 @@ eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
 	reg_val = IGC_READ_REG(hw, IGC_DVMOLR(rx_queue_id));
 	if (on) {
 		reg_val |= IGC_DVMOLR_STRVLAN;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		reg_val &= ~(IGC_DVMOLR_STRVLAN | IGC_DVMOLR_HIDVLAN);
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	IGC_WRITE_REG(hw, IGC_DVMOLR(rx_queue_id), reg_val);
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e6207939665e..824341fee3f6 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -280,37 +280,37 @@ ionic_dev_link_update(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(link));
 
 	if (adapter->idev.port_info->config.an_enable) {
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	}
 
 	if (!adapter->link_up ||
 	    !(lif->state & IONIC_LIF_F_UP)) {
 		/* Interface is down */
-		link.link_status = ETH_LINK_DOWN;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else {
 		/* Interface is up */
-		link.link_status = ETH_LINK_UP;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		switch (adapter->link_speed) {
 		case  10000:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case  25000:
-			link.link_speed = ETH_SPEED_NUM_25G;
+			link.link_speed = RTE_ETH_SPEED_NUM_25G;
 			break;
 		case  40000:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case  50000:
-			link.link_speed = ETH_SPEED_NUM_50G;
+			link.link_speed = RTE_ETH_SPEED_NUM_50G;
 			break;
 		case 100000:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
@@ -397,17 +397,17 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->flow_type_rss_offloads = IONIC_ETH_RSS_OFFLOAD_ALL;
 
 	dev_info->speed_capa =
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	/*
 	 * Per-queue capabilities
 	 * RTE does not support disabling a feature on a queue if it is
 	 * enabled globally on the device. Thus the driver does not advertise
-	 * capabilities like DEV_TX_OFFLOAD_IPV4_CKSUM as per-queue even
+	 * capabilities like RTE_ETH_TX_OFFLOAD_IPV4_CKSUM as per-queue even
 	 * though the driver would be otherwise capable of disabling it on
 	 * a per-queue basis.
 	 */
@@ -421,25 +421,25 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	 */
 
 	dev_info->rx_offload_capa = dev_info->rx_queue_offload_capa |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
 		0;
 
 	dev_info->tx_offload_capa = dev_info->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
 		0;
 
 	dev_info->rx_desc_lim = rx_desc_lim;
@@ -474,9 +474,9 @@ ionic_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		fc_conf->autoneg = 0;
 
 		if (idev->port_info->config.pause_type)
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -498,14 +498,14 @@ ionic_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		pause_type = IONIC_PORT_PAUSE_TYPE_NONE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_type = IONIC_PORT_PAUSE_TYPE_LINK;
 		break;
-	case RTE_FC_RX_PAUSE:
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		return -ENOTSUP;
 	}
 
@@ -629,17 +629,17 @@ ionic_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 			IONIC_RSS_HASH_KEY_SIZE);
 
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -671,17 +671,17 @@ ionic_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (!lif->rss_ind_tbl)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 			rss_types |= IONIC_RSS_TYPE_IPV4;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_UDP;
-		if (rss_conf->rss_hf & ETH_RSS_IPV6)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 			rss_types |= IONIC_RSS_TYPE_IPV6;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_UDP;
 
 		ionic_lif_rss_config(lif, rss_types, key, NULL);
@@ -853,15 +853,15 @@ ionic_dev_configure(struct rte_eth_dev *eth_dev)
 static inline uint32_t
 ionic_parse_link_speeds(uint16_t link_speeds)
 {
-	if (link_speeds & ETH_LINK_SPEED_100G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 		return 100000;
-	else if (link_speeds & ETH_LINK_SPEED_50G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 		return 50000;
-	else if (link_speeds & ETH_LINK_SPEED_40G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		return 40000;
-	else if (link_speeds & ETH_LINK_SPEED_25G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		return 25000;
-	else if (link_speeds & ETH_LINK_SPEED_10G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		return 10000;
 	else
 		return 0;
@@ -885,12 +885,12 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	IONIC_PRINT_CALL();
 
 	allowed_speeds =
-		ETH_LINK_SPEED_FIXED |
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_FIXED |
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	if (dev_conf->link_speeds & ~allowed_speeds) {
 		IONIC_PRINT(ERR, "Invalid link setting");
@@ -907,7 +907,7 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure link */
-	an_enable = (dev_conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+	an_enable = (dev_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 	ionic_dev_cmd_port_autoneg(idev, an_enable);
 	err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
diff --git a/drivers/net/ionic/ionic_ethdev.h b/drivers/net/ionic/ionic_ethdev.h
index 6cbcd0f825a3..652f28c97d57 100644
--- a/drivers/net/ionic/ionic_ethdev.h
+++ b/drivers/net/ionic/ionic_ethdev.h
@@ -8,12 +8,12 @@
 #include <rte_ethdev.h>
 
 #define IONIC_ETH_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define IONIC_ETH_DEV_TO_LIF(eth_dev) ((struct ionic_lif *) \
 	(eth_dev)->data->dev_private)
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index 431eda777b78..d4eb6c1d78be 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1688,12 +1688,12 @@ ionic_lif_configure_vlan_offload(struct ionic_lif *lif, int mask)
 
 	/*
 	 * IONIC_ETH_HW_VLAN_RX_FILTER cannot be turned off, so
-	 * set DEV_RX_OFFLOAD_VLAN_FILTER and ignore ETH_VLAN_FILTER_MASK
+	 * set RTE_ETH_RX_OFFLOAD_VLAN_FILTER and ignore RTE_ETH_VLAN_FILTER_MASK
 	 */
-	rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			lif->features |= IONIC_ETH_HW_VLAN_RX_STRIP;
 		else
 			lif->features &= ~IONIC_ETH_HW_VLAN_RX_STRIP;
@@ -1733,19 +1733,19 @@ ionic_lif_configure(struct ionic_lif *lif)
 	/*
 	 * NB: While it is true that RSS_HASH is always enabled on ionic,
 	 *     setting this flag unconditionally causes problems in DTS.
-	 * rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	 * rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	 */
 
 	/* RX per-port */
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_UDP_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_TCP_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 		lif->features |= IONIC_ETH_HW_RX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_RX_CSUM;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		lif->features |= IONIC_ETH_HW_RX_SG;
 		lif->eth_dev->data->scattered_rx = 1;
 	} else {
@@ -1754,30 +1754,30 @@ ionic_lif_configure(struct ionic_lif *lif)
 	}
 
 	/* Covers VLAN_STRIP */
-	ionic_lif_configure_vlan_offload(lif, ETH_VLAN_STRIP_MASK);
+	ionic_lif_configure_vlan_offload(lif, RTE_ETH_VLAN_STRIP_MASK);
 
 	/* TX per-port */
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		lif->features |= IONIC_ETH_HW_TX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_CSUM;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		lif->features |= IONIC_ETH_HW_VLAN_TX_TAG;
 	else
 		lif->features &= ~IONIC_ETH_HW_VLAN_TX_TAG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		lif->features |= IONIC_ETH_HW_TX_SG;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_SG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		lif->features |= IONIC_ETH_HW_TSO;
 		lif->features |= IONIC_ETH_HW_TSO_IPV6;
 		lif->features |= IONIC_ETH_HW_TSO_ECN;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index b83ea1bcaa6a..0c1f6113d0e9 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -204,11 +204,11 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
 		txq->flags |= IONIC_QCQ_F_DEFERRED;
 
 	/* Convert the offload flags into queue flags */
-	if (offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_L3;
-	if (offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_TCP;
-	if (offloads & DEV_TX_OFFLOAD_UDP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_UDP;
 
 	eth_dev->data->tx_queues[tx_queue_id] = txq;
@@ -745,11 +745,11 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 
 	/*
 	 * Note: the interface does not currently support
-	 * DEV_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
+	 * RTE_ETH_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
 	 * when the adapter will be able to keep the CRC and subtract
 	 * it to the length for all received packets:
 	 * if (eth_dev->data->dev_conf.rxmode.offloads &
-	 *     DEV_RX_OFFLOAD_KEEP_CRC)
+	 *     RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 	 *   rxq->crc_len = ETHER_CRC_LEN;
 	 */
 
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa5877d..2f6df2c2f6b8 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -50,11 +50,11 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->speed_capa =
 		(hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ?
-		ETH_LINK_SPEED_10G :
+		RTE_ETH_LINK_SPEED_10G :
 		((hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ?
-		ETH_LINK_SPEED_25G :
-		ETH_LINK_SPEED_AUTONEG);
+		RTE_ETH_LINK_SPEED_25G :
+		RTE_ETH_LINK_SPEED_AUTONEG);
 
 	dev_info->max_rx_queues  = 1;
 	dev_info->max_tx_queues  = 1;
@@ -67,31 +67,31 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	};
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 
 	dev_info->dev_capa =
@@ -2410,10 +2410,10 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
 				(uint64_t *)&link_speed);
 	switch (link_speed) {
 	case IFPGA_RAWDEV_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case IFPGA_RAWDEV_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
 		IPN3KE_AFU_PMD_ERR("Unknown link speed info %u", link_speed);
@@ -2471,9 +2471,9 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2529,9 +2529,9 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(rpst->ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2803,10 +2803,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
 
 	if (frame_size > IPN3KE_ETH_MAX_LEN)
 		dev_data->dev_conf.rxmode.offloads |=
-			(uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
+			(uint64_t)(RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
 	else
 		dev_data->dev_conf.rxmode.offloads &=
-			(uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
+			(uint64_t)(~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
 
 	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b5371568b54d..3707daf4760f 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1865,7 +1865,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= IXGBE_DMATXCTL_GDV;
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (qinq) {
 			reg = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			reg = (reg & (~IXGBE_VLNCTRL_VET)) | (uint32_t)tpid;
@@ -1880,7 +1880,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    " by single VLAN");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (qinq) {
 			/* Only the high 16-bits is valid */
 			IXGBE_WRITE_REG(hw, IXGBE_EXVET, (uint32_t)tpid <<
@@ -1967,10 +1967,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -2091,7 +2091,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	if (hw->mac.type == ixgbe_mac_82598EB) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			ctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			ctrl |= IXGBE_VLNCTRL_VME;
 			IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, ctrl);
@@ -2108,7 +2108,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
 			ctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
-			if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+			if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 				ctrl |= IXGBE_RXDCTL_VME;
 				on = TRUE;
 			} else {
@@ -2130,17 +2130,17 @@ ixgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct ixgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -2151,19 +2151,19 @@ ixgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		ixgbe_vlan_hw_strip_config(dev);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ixgbe_vlan_hw_filter_enable(dev);
 		else
 			ixgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			ixgbe_vlan_hw_extend_enable(dev);
 		else
 			ixgbe_vlan_hw_extend_disable(dev);
@@ -2202,10 +2202,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -2229,18 +2229,18 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -2250,12 +2250,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -2264,12 +2264,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -2284,13 +2284,13 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdb+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2299,15 +2299,15 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2316,39 +2316,39 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
@@ -2357,7 +2357,7 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB) {
 			if (nb_tx_q > IXGBE_NONE_MODE_TX_NB_QUEUES) {
 				PMD_INIT_LOG(ERR,
@@ -2381,8 +2381,8 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multipe queue mode checking */
 	ret  = ixgbe_check_mq_mode(dev);
@@ -2627,15 +2627,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		ixgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -2712,17 +2712,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |  ETH_LINK_SPEED_5G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |  RTE_ETH_LINK_SPEED_5G |
+			RTE_ETH_LINK_SPEED_10G;
 		if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 				hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-			allowed_speeds = ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+			allowed_speeds = RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 		break;
 	default:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 	}
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
@@ -2736,7 +2736,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		switch (hw->mac.type) {
 		case ixgbe_mac_82598EB:
 			speed = IXGBE_LINK_SPEED_82598_AUTONEG;
@@ -2754,17 +2754,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 			speed = IXGBE_LINK_SPEED_82599_AUTONEG;
 		}
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= IXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= IXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= IXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= IXGBE_LINK_SPEED_100_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= IXGBE_LINK_SPEED_10_FULL;
 	}
 
@@ -3839,7 +3839,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB)
 			dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
 	}
@@ -3849,9 +3849,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->max_mtu =  dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
@@ -3890,21 +3890,21 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = ixgbe_reta_size_get(hw->mac.type);
 	dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 			hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-		dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 
 	if (hw->mac.type == ixgbe_mac_X540 ||
 	    hw->mac.type == ixgbe_mac_X540_vf ||
 	    hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550_vf) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	}
 	if (hw->mac.type == ixgbe_mac_X550) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-		dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 	}
 
 	/* Driver-preferred Rx/Tx parameters */
@@ -3973,9 +3973,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = ixgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (ixgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
@@ -4218,11 +4218,11 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	u32 esdp_reg;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	hw->mac.get_link_status = true;
 
@@ -4244,8 +4244,8 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		diag = ixgbe_check_link(hw, &link_speed, &link_up, wait);
 
 	if (diag != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -4281,37 +4281,37 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case IXGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 
 	case IXGBE_LINK_SPEED_10_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 
 	case IXGBE_LINK_SPEED_100_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case IXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case IXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
 
@@ -4528,7 +4528,7 @@ ixgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -4747,13 +4747,13 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -5199,11 +5199,11 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* switch to jumbo mode if needed */
 	if (frame_size > IXGBE_ETH_MAX_LEN) {
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		hlreg0 |= IXGBE_HLREG0_JUMBOEN;
 	} else {
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
 	}
 	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
@@ -5271,22 +5271,22 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -5346,8 +5346,8 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
 	ixgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -5581,10 +5581,10 @@ ixgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only support hw strip feature, others are not support */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			ixgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
@@ -5715,12 +5715,12 @@ ixgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), 0);
 		}
@@ -5734,15 +5734,15 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= IXGBE_VMOLR_AUPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= IXGBE_VMOLR_ROMPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= IXGBE_VMOLR_ROPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= IXGBE_VMOLR_BAM;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= IXGBE_VMOLR_MPE;
 
 	return new_val;
@@ -5753,8 +5753,8 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 #define IXGBE_MRCTL_DPME  0x04 /* Downlink Port Mirroring. */
 #define IXGBE_MRCTL_VLME  0x08 /* VLAN Mirroring. */
 #define IXGBE_INVALID_MIRROR_TYPE(mirror_type) \
-	((mirror_type) & ~(uint8_t)(ETH_MIRROR_VIRTUAL_POOL_UP | \
-	ETH_MIRROR_UPLINK_PORT | ETH_MIRROR_DOWNLINK_PORT | ETH_MIRROR_VLAN))
+	((mirror_type) & ~(uint8_t)(RTE_ETH_MIRROR_VIRTUAL_POOL_UP | \
+	RTE_ETH_MIRROR_UPLINK_PORT | RTE_ETH_MIRROR_DOWNLINK_PORT | RTE_ETH_MIRROR_VLAN))
 
 static int
 ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
@@ -5794,7 +5794,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) {
 		mirror_type |= IXGBE_MRCTL_VLME;
 		/* Check if vlan id is valid and find conresponding VLAN ID
 		 * index in VLVF
@@ -5827,7 +5827,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 
 			mr_info->mr_conf[rule_id].vlan.vlan_mask =
 						mirror_conf->vlan.vlan_mask;
-			for (i = 0; i < ETH_VMDQ_MAX_VLAN_FILTERS; i++) {
+			for (i = 0; i < RTE_ETH_VMDQ_MAX_VLAN_FILTERS; i++) {
 				if (mirror_conf->vlan.vlan_mask & (1ULL << i))
 					mr_info->mr_conf[rule_id].vlan.vlan_id[i] =
 						mirror_conf->vlan.vlan_id[i];
@@ -5836,7 +5836,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 			mv_lsb = 0;
 			mv_msb = 0;
 			mr_info->mr_conf[rule_id].vlan.vlan_mask = 0;
-			for (i = 0; i < ETH_VMDQ_MAX_VLAN_FILTERS; i++)
+			for (i = 0; i < RTE_ETH_VMDQ_MAX_VLAN_FILTERS; i++)
 				mr_info->mr_conf[rule_id].vlan.vlan_id[i] = 0;
 		}
 	}
@@ -5845,7 +5845,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 	 * if enable pool mirror, write related pool mask register,if disable
 	 * pool mirror, clear PFMRVM register
 	 */
-	if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_VIRTUAL_POOL_UP) {
 		mirror_type |= IXGBE_MRCTL_VPME;
 		if (on) {
 			mp_lsb = mirror_conf->pool_mask & 0xFFFFFFFF;
@@ -5859,9 +5859,9 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 			mr_info->mr_conf[rule_id].pool_mask = 0;
 		}
 	}
-	if (mirror_conf->rule_type & ETH_MIRROR_UPLINK_PORT)
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_UPLINK_PORT)
 		mirror_type |= IXGBE_MRCTL_UPME;
-	if (mirror_conf->rule_type & ETH_MIRROR_DOWNLINK_PORT)
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_DOWNLINK_PORT)
 		mirror_type |= IXGBE_MRCTL_DPME;
 
 	/* read  mirror control register and recalculate it */
@@ -5882,13 +5882,13 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 	IXGBE_WRITE_REG(hw, IXGBE_MRCTL(rule_id), mr_ctl);
 
 	/* write pool mirrror control  register */
-	if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_VIRTUAL_POOL_UP) {
 		IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id), mp_lsb);
 		IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id + rule_mr_offset),
 				mp_msb);
 	}
 	/* write VLAN mirrror control  register */
-	if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) {
 		IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id), mv_lsb);
 		IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id + rule_mr_offset),
 				mv_msb);
@@ -6266,7 +6266,7 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
 	 * register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
 	 * set as 0x4.
 	 */
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) &&
 	    (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
 		IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
 			IXGBE_MMW_SIZE_JUMBO_FRAME);
@@ -6942,15 +6942,15 @@ ixgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = IXGBE_INCVAL_100;
 		shift = IXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = IXGBE_INCVAL_1GB;
 		shift = IXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = IXGBE_INCVAL_10GB;
 		shift = IXGBE_INCVAL_SHIFT_10GB;
@@ -7361,16 +7361,16 @@ ixgbe_reta_size_get(enum ixgbe_mac_type mac_type) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		return ETH_RSS_RETA_SIZE_512;
+		return RTE_ETH_RSS_RETA_SIZE_512;
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
-		return ETH_RSS_RETA_SIZE_64;
+		return RTE_ETH_RSS_RETA_SIZE_64;
 	case ixgbe_mac_X540_vf:
 	case ixgbe_mac_82599_vf:
 		return 0;
 	default:
-		return ETH_RSS_RETA_SIZE_128;
+		return RTE_ETH_RSS_RETA_SIZE_128;
 	}
 }
 
@@ -7380,10 +7380,10 @@ ixgbe_reta_reg_get(enum ixgbe_mac_type mac_type, uint16_t reta_idx) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		if (reta_idx < ETH_RSS_RETA_SIZE_128)
+		if (reta_idx < RTE_ETH_RSS_RETA_SIZE_128)
 			return IXGBE_RETA(reta_idx >> 2);
 		else
-			return IXGBE_ERETA((reta_idx - ETH_RSS_RETA_SIZE_128) >> 2);
+			return IXGBE_ERETA((reta_idx - RTE_ETH_RSS_RETA_SIZE_128) >> 2);
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
@@ -7439,7 +7439,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -7450,7 +7450,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled*/
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -7474,9 +7474,9 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled*/
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7489,7 +7489,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7742,7 +7742,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -7774,7 +7774,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -7871,12 +7871,12 @@ ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_add_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
 
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
@@ -7908,11 +7908,11 @@ ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_del_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
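
For application code that follows this rename, a minimal sketch (not part of the patch; the port id, queue counts and offload selection are illustrative only) of configuring a port with the prefixed names:

    #include <rte_ethdev.h>

    /* Illustrative only: one Rx/Tx queue pair, RSS on IPv4/TCP, checksum
     * offloads requested with the RTE_ETH_-prefixed names. */
    static int
    configure_port(uint16_t port_id)
    {
        struct rte_eth_conf conf = {
            .rxmode = {
                .mq_mode  = RTE_ETH_MQ_RX_RSS,
                .offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
            },
            .rx_adv_conf = {
                .rss_conf = {
                    .rss_hf = RTE_ETH_RSS_IPV4 |
                              RTE_ETH_RSS_NONFRAG_IPV4_TCP,
                },
            },
            .txmode = {
                .offloads = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
                            RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
            },
        };

        return rte_eth_dev_configure(port_id, 1, 1, &conf);
    }
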
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca246b..3443154589e8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -113,15 +113,15 @@
 #define IXGBE_FDIR_NVGRE_TUNNEL_TYPE    0x0
 
 #define IXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IXGBE_VF_IRQ_ENABLE_MASK        3          /* vf irq enable mask */
 #define IXGBE_VF_MAXMSIVECTOR           1
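
The IXGBE_RSS_OFFLOAD_ALL mask above is now built from the renamed RTE_ETH_RSS_* bits; a rough sketch (not part of the patch) of how an application can clamp its requested hash functions to what a port reports:

    #include <rte_ethdev.h>

    /* Illustrative only: keep just the RTE_ETH_RSS_* bits the port supports. */
    static uint64_t
    supported_rss(uint16_t port_id, uint64_t wanted)
    {
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return 0;

        return wanted & info.flow_type_rss_offloads;
    }
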
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 27a49bbce5e7..7894047829a8 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -90,9 +90,9 @@ static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
 static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 				 uint32_t key);
 static uint32_t atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static uint32_t atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static int fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
 			union ixgbe_atr_input *input, uint8_t queue,
 			uint32_t fdircmd, uint32_t fdirhash,
@@ -163,20 +163,20 @@ fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf, uint32_t *fdirctrl)
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf, uint32_t *fdirctrl)
 {
 	*fdirctrl = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
 		break;
@@ -807,13 +807,13 @@ ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_128KB_HASH_MASK;
@@ -850,15 +850,15 @@ ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_128KB_HASH_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 511b612f7fe4..0557de6c1aa5 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1259,7 +1259,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6bf..944c9f23809e 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -392,7 +392,7 @@ ixgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -400,7 +400,7 @@ ixgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -633,11 +633,11 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -657,7 +657,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
 	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
 		if (reg != 0) {
@@ -665,7 +665,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
 				IXGBE_SECTXCTRL_STORE_FORWARD);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index fbf2b17d160f..d03238b728ba 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -107,15 +107,15 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct ixgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -266,15 +266,15 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
 		gpie |= IXGBE_GPIE_VTMODE_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
 		gpie |= IXGBE_GPIE_VTMODE_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_16;
 		gpie |= IXGBE_GPIE_VTMODE_16;
 		break;
@@ -604,11 +604,11 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 		hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
 		if (max_frame > IXGBE_ETH_MAX_LEN) {
 			dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_JUMBO_FRAME;
+				RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			hlreg0 |= IXGBE_HLREG0_JUMBOEN;
 		} else {
 			dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+				~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
 		}
 		IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
@@ -684,29 +684,29 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 		vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf));
 		vlana = vmvir & IXGBE_VMVIR_VLANA_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index c814a28cb49a..b5ee83d8edc8 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2591,26 +2591,26 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2778,7 +2778,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/*
@@ -3014,7 +3014,7 @@ ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (hw->mac.type != ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return offloads;
 }
@@ -3025,20 +3025,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t offloads;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_JUMBO_FRAME |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_SCATTER |
-		   DEV_RX_OFFLOAD_RSS_HASH;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_SCATTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (ixgbe_is_vf(dev) == 0)
-		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	/*
 	 * RSC is only supported by 82599 and x540 PF devices in a non-SR-IOV
@@ -3048,20 +3048,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	     hw->mac.type == ixgbe_mac_X540 ||
 	     hw->mac.type == ixgbe_mac_X550) &&
 	    !RTE_ETH_DEV_SRIOV(dev).active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -3116,7 +3116,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -3520,23 +3520,23 @@ ixgbe_hw_rss_hash_set(struct ixgbe_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IXGBE_MRQC_RSSEN; /* Enable RSS */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP;
 	IXGBE_WRITE_REG(hw, mrqc_reg, mrqc);
 }
@@ -3618,23 +3618,23 @@ ixgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -3710,12 +3710,12 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		ixgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * RXPBSIZE
@@ -3740,7 +3740,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
 
 		rxpbsize &= (~(0x3FF << IXGBE_RXPBSIZE_SHIFT));
@@ -3749,7 +3749,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	}
 
 	/* MRQC: enable vmdq and dcb */
-	mrqc = (num_pools == ETH_16_POOLS) ?
+	mrqc = (num_pools == RTE_ETH_16_POOLS) ?
 		IXGBE_MRQC_VMDQRT8TCEN : IXGBE_MRQC_VMDQRT4TCEN;
 	IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
 
@@ -3765,7 +3765,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* RTRUP2TC: mapping user priorities to traffic classes (TCs) */
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3789,7 +3789,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 16 or 32 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*
 	 * MPSAR - allow pools to read specific mac addresses
@@ -3871,7 +3871,7 @@ ixgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	if (hw->mac.type != ixgbe_mac_82598EB)
 		/*PF VF Transmit Enable*/
 		IXGBE_WRITE_REG(hw, IXGBE_VFTE(0),
-			vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	ixgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3887,12 +3887,12 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3902,7 +3902,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3920,12 +3920,12 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3935,7 +3935,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3962,7 +3962,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3989,7 +3989,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -4158,7 +4158,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		if (hw->mac.type != ixgbe_mac_82598EB) {
 			config_dcb_rx = DCB_RX_CONFIG;
@@ -4171,8 +4171,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			ixgbe_vmdq_dcb_configure(dev);
 		}
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -4185,7 +4185,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -4196,7 +4196,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		ixgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/*get DCB TX configuration parameters from rte_eth_conf*/
@@ -4212,15 +4212,15 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	ixgbe_dcb_unpack_map_cee(dcb_config, IXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < IXGBE_DCB_MAX_TRAFFIC_CLASS); i++) {
-			if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+			if ((mask & 0x1) && (j < RTE_ETH_DCB_NUM_USER_PRIORITIES))
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -4270,7 +4270,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 		}
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
 		}
 	}
@@ -4286,7 +4286,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size*/
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
 		}
@@ -4322,7 +4322,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	ixgbe_dcb_config_tc_stats_82599(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/*
@@ -4336,7 +4336,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = ixgbe_dcb_pfc_enabled;
 		}
 		ixgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = ixgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -4357,12 +4357,12 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+	if ((dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB) &&
+	    (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB) &&
+	    (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS))
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -4418,7 +4418,7 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 64 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		IXGBE_WRITE_REG(hw, IXGBE_VFRE(1), UINT32_MAX);
 
 	/*
@@ -4539,11 +4539,11 @@ ixgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
 	mrqc &= ~IXGBE_MRQC_MRQE_MASK;
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS64EN;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS32EN;
 		break;
 
@@ -4564,17 +4564,17 @@ ixgbe_config_vf_default(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQEN);
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT4TCEN);
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT8TCEN);
 		break;
@@ -4601,21 +4601,21 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			ixgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode.*/
 			ixgbe_rss_disable(dev);
@@ -4626,18 +4626,18 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4671,7 +4671,7 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			ixgbe_vmdq_tx_hw_configure(hw);
 		else {
 			mtqc = IXGBE_MTQC_64Q_1PB;
@@ -4684,13 +4684,13 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_32VF;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_RT_ENA |
 				IXGBE_MTQC_8TC_8TQ;
 			break;
@@ -4898,7 +4898,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
 		rxq->rx_using_sse = rx_using_sse;
 #ifdef RTE_LIB_SECURITY
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 #endif
 	}
 }
@@ -4926,10 +4926,10 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4937,8 +4937,8 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration (chapter 4.6.7.2.1 of 82599 Spec) */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		/*
 		 * According to chapter of 4.6.7.2.1 of the Spec Rev.
 		 * 3.0 RSC configuration requires HW CRC stripping being
@@ -4952,7 +4952,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RFCTL configuration  */
 	rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
-	if ((rsc_capable) && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if ((rsc_capable) && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~IXGBE_RFCTL_RSC_DIS;
 	else
 		rfctl |= IXGBE_RFCTL_RSC_DIS;
@@ -4961,7 +4961,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 	IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set RDRXCTL.RSCACKC bit */
@@ -5082,7 +5082,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~IXGBE_HLREG0_RXCRCSTRP;
 	else
 		hlreg0 |= IXGBE_HLREG0_RXCRCSTRP;
@@ -5090,7 +5090,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	/*
 	 * Configure jumbo frame support, if any.
 	 */
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		hlreg0 |= IXGBE_HLREG0_JUMBOEN;
 		maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
 		maxfrs &= 0x0000FFFF;
@@ -5119,7 +5119,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5128,7 +5128,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -5171,11 +5171,11 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
 					    2 * IXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -5190,7 +5190,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM);
 	rxcsum |= IXGBE_RXCSUM_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= IXGBE_RXCSUM_IPPCSE;
 	else
 		rxcsum &= ~IXGBE_RXCSUM_IPPCSE;
@@ -5200,7 +5200,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540) {
 		rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~IXGBE_RDRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= IXGBE_RDRXCTL_CRCSTRIP;
@@ -5406,9 +5406,9 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 
 #ifdef RTE_LIB_SECURITY
 	if ((dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SECURITY) ||
+			RTE_ETH_RX_OFFLOAD_SECURITY) ||
 		(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY)) {
+			RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = ixgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -5696,7 +5696,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5745,7 +5745,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
 				       IXGBE_SRRCTL_BSIZEPKT_SHIFT);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (rxmode->max_rx_pkt_len +
 				2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -5754,8 +5754,8 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/* Set RQPL for VF RSS according to max Rx queue */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 476ef62cfda2..220efffe4d08 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -133,7 +133,7 @@ struct ixgbe_rx_queue {
 	uint8_t             rx_udp_csum_zero_err;
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -226,7 +226,7 @@ struct ixgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index adba855ca30f..714707941537 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -278,7 +278,7 @@ static inline int
 ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 	/* no fdir support */
 	if (fconf->mode != RTE_FDIR_MODE_NONE)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index a8407e742e6d..c2ab3131f22e 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -119,14 +119,14 @@ ixgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -375,10 +375,10 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -392,7 +392,7 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a19408..536e33010703 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -58,20 +58,20 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	/**< Maximum number of MAC addresses. */
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |	DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 	/**< Device RX offload capabilities. */
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	/**< Device TX offload capabilities. */
 
 	dev_info->speed_capa =
 		representor->pf_ethdev->data->dev_link.link_speed;
-	/**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+	/**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 
 	dev_info->switch_info.name =
 		representor->pf_ethdev->device->name;
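
Since speed_capa and the link fields now use the RTE_ETH_LINK_* / RTE_ETH_SPEED_NUM_* names, a short usage sketch (not part of the patch) of reading them back on the application side:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Illustrative only: print the negotiated link using the renamed
     * RTE_ETH_LINK_* constants. */
    static void
    print_link(uint16_t port_id)
    {
        struct rte_eth_link link;

        if (rte_eth_link_get_nowait(port_id, &link) != 0)
            return;

        if (link.link_status == RTE_ETH_LINK_UP)
            printf("port %u: up, %u Mbps, %s-duplex\n", port_id,
                   link.link_speed,
                   link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
                   "full" : "half");
        else
            printf("port %u: down\n", port_id);
    }
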
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index cf089cd9aee5..9729f8575f53 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -303,10 +303,10 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 	 */
 	if (hw->mac.type == ixgbe_mac_82598EB)
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_16_POOLS;
+				  RTE_ETH_16_POOLS;
 	else
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_64_POOLS;
+				  RTE_ETH_64_POOLS;
 
 	for (q = 0; q < queues_per_pool; q++)
 		(*dev->dev_ops->vlan_strip_queue_set)(dev,
@@ -736,14 +736,14 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
 	bw_conf = IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 	eth_conf = &dev->data->dev_conf;
 
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1f8..bad6691648a1 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -285,8 +285,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 * @param rx_mask
 *    The RX mode mask, which is one or more of accepting Untagged Packets,
 *    packets that match the PFUTA table, Broadcast and Multicast Promiscuous.
-*    ETH_VMDQ_ACCEPT_UNTAG,ETH_VMDQ_ACCEPT_HASH_UC,
-*    ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST will be used
+*    RTE_ETH_VMDQ_ACCEPT_UNTAG,RTE_ETH_VMDQ_ACCEPT_HASH_UC,
+*    RTE_ETH_VMDQ_ACCEPT_BROADCAST and RTE_ETH_VMDQ_ACCEPT_MULTICAST will be used
 *    in rx_mode.
 * @param on
 *    1 - Enable a VF RX mode.
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index 871d11c4133d..29060ca76f93 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -61,10 +61,10 @@ struct pmd_internals {
 };
 
 static const struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 static int is_kni_initialized;
 
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index b72060a4499b..118170670fbb 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -384,15 +384,15 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
 	case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		break;
 	/* CN23xx 25G cards */
 	case PCI_SUBSYS_DEV_ID_CN2350_225:
 	case PCI_SUBSYS_DEV_ID_CN2360_225:
-		devinfo->speed_capa = ETH_LINK_SPEED_25G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		break;
 	default:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		lio_dev_err(lio_dev,
 			    "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
 		return -EINVAL;
@@ -406,27 +406,27 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	devinfo->max_mac_addrs = 1;
 
-	devinfo->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_RX_OFFLOAD_UDP_CKSUM		|
-				    DEV_RX_OFFLOAD_TCP_CKSUM		|
-				    DEV_RX_OFFLOAD_VLAN_STRIP		|
-				    DEV_RX_OFFLOAD_RSS_HASH);
-	devinfo->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_TX_OFFLOAD_UDP_CKSUM		|
-				    DEV_TX_OFFLOAD_TCP_CKSUM		|
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+	devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_VLAN_STRIP		|
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH);
+	devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
 
 	devinfo->rx_desc_lim = lio_rx_desc_lim;
 	devinfo->tx_desc_lim = lio_tx_desc_lim;
 
 	devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
 	devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
-	devinfo->flow_type_rss_offloads = (ETH_RSS_IPV4			|
-					   ETH_RSS_NONFRAG_IPV4_TCP	|
-					   ETH_RSS_IPV6			|
-					   ETH_RSS_NONFRAG_IPV6_TCP	|
-					   ETH_RSS_IPV6_EX		|
-					   ETH_RSS_IPV6_TCP_EX);
+	devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4			|
+					   RTE_ETH_RSS_NONFRAG_IPV4_TCP	|
+					   RTE_ETH_RSS_IPV6			|
+					   RTE_ETH_RSS_NONFRAG_IPV6_TCP	|
+					   RTE_ETH_RSS_IPV6_EX		|
+					   RTE_ETH_RSS_IPV6_TCP_EX);
 	return 0;
 }
 
@@ -483,10 +483,10 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 
 	if (frame_len > LIO_ETH_MAX_LEN)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
 	eth_dev->data->mtu = mtu;
@@ -616,17 +616,17 @@ lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 		memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
 
 	if (rss_state->ip)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (rss_state->tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (rss_state->ipv6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (rss_state->ipv6_tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (rss_state->ipv6_ex)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (rss_state->ipv6_tcp_ex_hash)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -694,42 +694,42 @@ lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (rss_state->hash_disable)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 			hashinfo |= LIO_RSS_HASH_IPV4;
 			rss_state->ip = 1;
 		} else {
 			rss_state->ip = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV4;
 			rss_state->tcp_hash = 1;
 		} else {
 			rss_state->tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
 			hashinfo |= LIO_RSS_HASH_IPV6;
 			rss_state->ipv6 = 1;
 		} else {
 			rss_state->ipv6 = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6;
 			rss_state->ipv6_tcp_hash = 1;
 		} else {
 			rss_state->ipv6_tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
 			hashinfo |= LIO_RSS_HASH_IPV6_EX;
 			rss_state->ipv6_ex = 1;
 		} else {
 			rss_state->ipv6_ex = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
 			rss_state->ipv6_tcp_ex_hash = 1;
 		} else {
@@ -778,7 +778,7 @@ lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -835,7 +835,7 @@ lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -933,10 +933,10 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	/* Initialize */
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	/* Return what we found */
 	if (lio_dev->linfo.link.s.link_up == 0) {
@@ -944,18 +944,18 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 		return rte_eth_linkstatus_set(eth_dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP; /* Interface is up */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	switch (lio_dev->linfo.link.s.speed) {
 	case LIO_LINK_SPEED_10000:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case LIO_LINK_SPEED_25000:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	}
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -1124,10 +1124,10 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rss_conf rss_conf;
 
 	switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		lio_dev_rss_configure(eth_dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 	/* if mq_mode is none, disable rss mode. */
 	default:
 		memset(&rss_conf, 0, sizeof(rss_conf));
@@ -1509,7 +1509,7 @@ lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 1;
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -1530,11 +1530,11 @@ lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 0;
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
 		lio_dev->linfo.link.s.link_up = 1;
-		eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		lio_dev_err(lio_dev, "Unable to set Link Down\n");
 		return -1;
 	}
@@ -1746,9 +1746,9 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Inform firmware about change in number of queues to use.
 	 * Disable IO queues and reset registers for re-configuration.
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index f58ff4c0cb77..a117a05228fc 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -525,7 +525,7 @@ memif_disconnect(struct rte_eth_dev *dev)
 	int i;
 	int ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTED;
 
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index de6becd45e3e..ea66f5bfd452 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -55,10 +55,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 #define MEMIF_MP_SEND_REGION		"memif_mp_send_region"
@@ -1216,7 +1216,7 @@ memif_connect(struct rte_eth_dev *dev)
 
 		pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 		pmd->flags |= ETH_MEMIF_FLAG_CONNECTED;
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	}
 	MIF_LOG(INFO, "Connected.");
 	return 0;
@@ -1367,10 +1367,10 @@ memif_link_update(struct rte_eth_dev *dev,
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		proc_private = dev->process_private;
-		if (dev->data->dev_link.link_status == ETH_LINK_UP &&
+		if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP &&
 				proc_private->regions_num == 0) {
 			memif_mp_request_regions(dev);
-		} else if (dev->data->dev_link.link_status == ETH_LINK_DOWN &&
+		} else if (dev->data->dev_link.link_status == RTE_ETH_LINK_DOWN &&
 				proc_private->regions_num > 0) {
 			memif_free_regions(dev);
 		}
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce8d..d606ec8ca76d 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -657,11 +657,11 @@ mlx4_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->if_index = priv->if_index;
 	info->hash_key_size = MLX4_RSS_HASH_KEY_SIZE;
 	info->speed_capa =
-			ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_20G |
-			ETH_LINK_SPEED_40G |
-			ETH_LINK_SPEED_56G;
+			RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_20G |
+			RTE_ETH_LINK_SPEED_40G |
+			RTE_ETH_LINK_SPEED_56G;
 	info->flow_type_rss_offloads = mlx4_conv_rss_types(priv, 0, 1);
 
 	return 0;
@@ -821,13 +821,13 @@ mlx4_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		dev_link.link_speed = link_speed;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	dev->data->dev_link = dev_link;
 	return 0;
 }
@@ -863,13 +863,13 @@ mlx4_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	ret = 0;
 out:
 	MLX4_ASSERT(ret >= 0);
@@ -899,13 +899,13 @@ mlx4_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 71ea91b3fb82..2e1b6c87e983 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -109,21 +109,21 @@ mlx4_conv_rss_types(struct mlx4_priv *priv, uint64_t types, int verbs_to_dpdk)
 	};
 	static const uint64_t dpdk[] = {
 		[INNER] = 0,
-		[IPV4] = ETH_RSS_IPV4,
-		[IPV4_1] = ETH_RSS_FRAG_IPV4,
-		[IPV4_2] = ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IPV6] = ETH_RSS_IPV6,
-		[IPV6_1] = ETH_RSS_FRAG_IPV6,
-		[IPV6_2] = ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IPV6_3] = ETH_RSS_IPV6_EX,
+		[IPV4] = RTE_ETH_RSS_IPV4,
+		[IPV4_1] = RTE_ETH_RSS_FRAG_IPV4,
+		[IPV4_2] = RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IPV6] = RTE_ETH_RSS_IPV6,
+		[IPV6_1] = RTE_ETH_RSS_FRAG_IPV6,
+		[IPV6_2] = RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IPV6_3] = RTE_ETH_RSS_IPV6_EX,
 		[TCP] = 0,
 		[UDP] = 0,
-		[IPV4_TCP] = ETH_RSS_NONFRAG_IPV4_TCP,
-		[IPV4_UDP] = ETH_RSS_NONFRAG_IPV4_UDP,
-		[IPV6_TCP] = ETH_RSS_NONFRAG_IPV6_TCP,
-		[IPV6_TCP_1] = ETH_RSS_IPV6_TCP_EX,
-		[IPV6_UDP] = ETH_RSS_NONFRAG_IPV6_UDP,
-		[IPV6_UDP_1] = ETH_RSS_IPV6_UDP_EX,
+		[IPV4_TCP] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+		[IPV4_UDP] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+		[IPV6_TCP] = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+		[IPV6_TCP_1] = RTE_ETH_RSS_IPV6_TCP_EX,
+		[IPV6_UDP] = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+		[IPV6_UDP_1] = RTE_ETH_RSS_IPV6_UDP_EX,
 	};
 	static const uint64_t verbs[RTE_DIM(dpdk)] = {
 		[INNER] = IBV_RX_HASH_INNER,
@@ -1283,7 +1283,7 @@ mlx4_flow_internal_next_vlan(struct mlx4_priv *priv, uint16_t vlan)
  * - MAC flow rules are generated from @p dev->data->mac_addrs
  *   (@p priv->mac array).
  * - An additional flow rule for Ethernet broadcasts is also generated.
- * - All these are per-VLAN if @p DEV_RX_OFFLOAD_VLAN_FILTER
+ * - All these are per-VLAN if @p RTE_ETH_RX_OFFLOAD_VLAN_FILTER
  *   is enabled and VLAN filters are configured.
  *
  * @param priv
@@ -1358,7 +1358,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	struct rte_ether_addr *rule_mac = &eth_spec.dst;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
-		 DEV_RX_OFFLOAD_VLAN_FILTER) &&
+		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
 		&vlan_spec.tci :
 		NULL;
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c41845..2aab0f60a7b5 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -118,7 +118,7 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
 static void
 mlx4_link_status_alarm(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	MLX4_ASSERT(priv->intr_alarm == 1);
@@ -183,7 +183,7 @@ mlx4_interrupt_handler(struct mlx4_priv *priv)
 	};
 	uint32_t caught[RTE_DIM(type)] = { 0 };
 	struct ibv_async_event event;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	unsigned int i;
 
@@ -280,7 +280,7 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
 int
 mlx4_intr_install(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	int rc;
 
@@ -386,7 +386,7 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
 int
 mlx4_rxq_intr_enable(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq && mlx4_rx_intr_vec_enable(priv) < 0)
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 978cbb8201ea..9977c761880a 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -682,13 +682,13 @@ mlx4_rxq_detach(struct rxq *rxq)
 uint64_t
 mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
-			    DEV_RX_OFFLOAD_KEEP_CRC |
-			    DEV_RX_OFFLOAD_JUMBO_FRAME |
-			    DEV_RX_OFFLOAD_RSS_HASH;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_SCATTER |
+			    RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+			    RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (priv->hw_csum)
-		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return offloads;
 }
 
@@ -704,7 +704,7 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 uint64_t
 mlx4_get_rx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	(void)priv;
 	return offloads;
@@ -785,7 +785,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	/* By default, FCS (CRC) is stripped by hardware. */
 	crc_present = 0;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (priv->hw_fcs_strip) {
 			crc_present = 1;
 		} else {
@@ -816,9 +816,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts = elts,
 		/* Toggle Rx checksum offload if hardware supports it. */
 		.csum = priv->hw_csum &&
-			(offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.csum_l2tun = priv->hw_csum_l2tun &&
-			      (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			      (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.crc_present = crc_present,
 		.l2tun_offload = priv->hw_csum_l2tun,
 		.stats = {
@@ -831,7 +831,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
 	    (mb_len - RTE_PKTMBUF_HEADROOM)) {
 		;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		uint32_t size =
 			RTE_PKTMBUF_HEADROOM +
 			dev->data->dev_conf.rxmode.max_rx_pkt_len;
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index 2df26842fbe4..19feec5e5202 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -273,20 +273,20 @@ mlx4_txq_fill_dv_obj_info(struct txq *txq, struct mlx4dv_obj *mlxdv)
 uint64_t
 mlx4_get_tx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+	uint64_t offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (priv->hw_csum) {
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	}
 	if (priv->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (priv->hw_csum_l2tun) {
-		offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (priv->tso)
-			offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	}
 	return offloads;
 }
@@ -394,12 +394,12 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts_comp_cd_init =
 			RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
 		.csum = priv->hw_csum &&
-			(offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					   DEV_TX_OFFLOAD_UDP_CKSUM |
-					   DEV_TX_OFFLOAD_TCP_CKSUM)),
+			(offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_TCP_CKSUM)),
 		.csum_l2tun = priv->hw_csum_l2tun &&
 			      (offloads &
-			       DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM),
+			       RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM),
 		/* Enable Tx loopback for VF devices. */
 		.lb = !!priv->vf,
 		.bounce_buf = bounce_buf,
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c641..79e27fe2d668 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -439,24 +439,24 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev,
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	else
 		dev_link.link_speed = link_speed;
 	priv->link_speed_capa = 0;
 	if (edata.supported & (SUPPORTED_1000baseT_Full |
 			       SUPPORTED_1000baseKX_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (edata.supported & SUPPORTED_10000baseKR_Full)
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (edata.supported & (SUPPORTED_40000baseKR4_Full |
 			       SUPPORTED_40000baseCR4_Full |
 			       SUPPORTED_40000baseSR4_Full |
 			       SUPPORTED_40000baseLR4_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -545,45 +545,45 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		return ret;
 	}
 	dev_link.link_speed = (ecmd->speed == UINT32_MAX) ?
-				ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
+				RTE_ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
 	sc = ecmd->link_mode_masks[0] |
 		((uint64_t)ecmd->link_mode_masks[1] << 32);
 	priv->link_speed_capa = 0;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseT_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_20G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_56G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_56G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_25G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_50G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_100G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	sc = ecmd->link_mode_masks[2] |
 		((uint64_t)ecmd->link_mode_masks[3] << 32);
@@ -591,11 +591,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		  MLX5_BITSHIFT
 		       (ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 	dev_link.link_duplex = ((ecmd->duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -677,13 +677,13 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -709,14 +709,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa481e..c40cda8fcaf9 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1343,8 +1343,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	config->hw_vlan_strip = !!(sh->device_attr.raw_packet_caps &
@@ -1627,7 +1627,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	/*
 	 * If HW has bug working with tunnel packet decapsulation and
 	 * scatter FCS, and decapsulation is needed, clear the hw_fcs_strip
-	 * bit. Then DEV_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
+	 * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
 	 */
 	if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en)
 		config->hw_fcs_strip = 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe719..ff1c8e17460a 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1463,10 +1463,10 @@ mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev __rte_unused,
 			 struct rte_eth_udp_tunnel *udp_tunnel)
 {
 	MLX5_ASSERT(udp_tunnel != NULL);
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN &&
 	    udp_tunnel->udp_port == 4789)
 		return 0;
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN_GPE &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN_GPE &&
 	    udp_tunnel->udp_port == 4790)
 		return 0;
 	return -ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02714e23196..9588dff05180 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1226,7 +1226,7 @@ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter);
 struct mlx5_flow_rss_desc {
 	uint32_t level;
 	uint32_t queue_num; /**< Number of entries in @p queue. */
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint64_t hash_fields; /* Verbs Hash fields. */
 	uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
 	uint32_t key_len; /**< RSS hash key len. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index fe86bb40d351..12ddf4c7ff28 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -90,11 +90,11 @@
 #define MLX5_VPMD_DESCS_PER_LOOP      4
 
 /* Mask of RSS on source only or destination only. */
-#define MLX5_RSS_SRC_DST_ONLY (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | \
-			       ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define MLX5_RSS_SRC_DST_ONLY (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+			       RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 /* Supported RSS */
-#define MLX5_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
+#define MLX5_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | \
 			    MLX5_RSS_SRC_DST_ONLY))
 
 /* Timeout in seconds to get a valid link status. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d9866..f2b78c3cc69e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -91,7 +91,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	if ((dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
+			RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
 			rte_mbuf_dyn_tx_timestamp_register(NULL, NULL) != 0) {
 		DRV_LOG(ERR, "port %u cannot register Tx timestamp field/flag",
 			dev->data->port_id);
@@ -225,8 +225,8 @@ mlx5_set_default_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->default_txportconf.ring_size = 256;
 	info->default_rxportconf.burst_size = MLX5_RX_DEFAULT_BURST;
 	info->default_txportconf.burst_size = MLX5_TX_DEFAULT_BURST;
-	if ((priv->link_speed_capa & ETH_LINK_SPEED_200G) |
-		(priv->link_speed_capa & ETH_LINK_SPEED_100G)) {
+	if ((priv->link_speed_capa & RTE_ETH_LINK_SPEED_200G) |
+		(priv->link_speed_capa & RTE_ETH_LINK_SPEED_100G)) {
 		info->default_rxportconf.nb_queues = 16;
 		info->default_txportconf.nb_queues = 16;
 		if (dev->data->nb_rx_queues > 2 ||
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 4762fa0f5f88..7048fff3883e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -98,7 +98,7 @@ struct mlx5_flow_expand_node {
 	uint64_t rss_types;
 	/**<
 	 * RSS types bit-field associated with this node
-	 * (see ETH_RSS_* definitions).
+	 * (see RTE_ETH_RSS_* definitions).
 	 */
 	uint64_t node_flags;
 	/**<
@@ -272,7 +272,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
  * @param[in] pattern
  *   User flow pattern.
  * @param[in] types
- *   RSS types to expand (see ETH_RSS_* definitions).
+ *   RSS types to expand (see RTE_ETH_RSS_* definitions).
  * @param[in] graph
  *   Input graph to expand @p pattern according to @p types.
  * @param[in] graph_root_index
@@ -522,8 +522,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_IPV4,
 			 MLX5_EXPANSION_IPV6),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -531,11 +531,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -546,8 +546,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_GRE,
 			 MLX5_EXPANSION_NVGRE),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -555,11 +555,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_VXLAN] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
@@ -612,32 +612,32 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
 						  MLX5_EXPANSION_IPV4_TCP),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_IPV4_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
 						  MLX5_EXPANSION_IPV6_TCP,
 						  MLX5_EXPANSION_IPV6_FRAG_EXT),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_IPV6_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_IPV6_FRAG_EXT] = {
 		.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
@@ -1048,7 +1048,7 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
  * @param[in] tunnel
  *   1 when the hash field is for a tunnel item.
  * @param[in] layer_types
- *   ETH_RSS_* types.
+ *   RTE_ETH_RSS_* types.
  * @param[in] hash_fields
  *   Item hash fields.
  *
@@ -1601,14 +1601,14 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 					  &rss->types,
 					  "some RSS protocols are not"
 					  " supported");
-	if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
-	    !(rss->types & ETH_RSS_IP))
+	if ((rss->types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) &&
+	    !(rss->types & RTE_ETH_RSS_IP))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L3 partial RSS requested but L3 RSS"
 					  " type not specified");
-	if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
-	    !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
+	if ((rss->types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) &&
+	    !(rss->types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L4 partial RSS requested but L4 RSS"
@@ -6364,8 +6364,8 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		 * mlx5_flow_hashfields_adjust() in advance.
 		 */
 		rss_desc->level = rss->level;
-		/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-		rss_desc->types = !rss->types ? ETH_RSS_IP : rss->types;
+		/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+		rss_desc->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	}
 	flow->dev_handles = 0;
 	if (rss && rss->types) {
@@ -6989,7 +6989,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
 	if (!priv->reta_idx_n || !priv->rxqs_n) {
 		return 0;
 	}
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		action_rss.types = 0;
 	for (i = 0; i != priv->reta_idx_n; ++i)
 		queue[i] = (*priv->reta_idx)[i];
@@ -8657,7 +8657,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
 				(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 				NULL, "invalid port configuration");
-		if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+		if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 			ctx->action_rss.types = 0;
 		for (i = 0; i != priv->reta_idx_n; ++i)
 			ctx->queue[i] = (*priv->reta_idx)[i];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 76ad53f2a1e8..d5d3a89374fe 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -328,18 +328,18 @@ enum mlx5_feature_name {
 
 /* Valid layer type for IPV4 RSS. */
 #define MLX5_IPV4_LAYER_TYPES \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-	 ETH_RSS_NONFRAG_IPV4_OTHER)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
 
 /* IBV hash source bits  for IPV4. */
 #define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
 
 /* Valid layer type for IPV6 RSS. */
 #define MLX5_IPV6_LAYER_TYPES \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
-	 ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX  | ETH_RSS_IPV6_TCP_EX | \
-	 ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_EX  | RTE_ETH_RSS_IPV6_TCP_EX | \
+	 RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 /* IBV hash source bits  for IPV6. */
 #define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 3f6f5dcfbadb..02a337dc2c93 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10934,9 +10934,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4;
 			else
 				dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH;
@@ -10944,9 +10944,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6;
 			else
 				dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH;
@@ -10960,11 +10960,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		return;
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
-		if (rss_types & ETH_RSS_UDP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_UDP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_UDP;
 			else
@@ -10972,11 +10972,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		}
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
-		if (rss_types & ETH_RSS_TCP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_TCP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_TCP;
 			else
@@ -14495,9 +14495,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4:
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV4;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV4;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV4;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV4;
@@ -14506,9 +14506,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV6:
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV6;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV6;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV6;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV6;
@@ -14517,11 +14517,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_UDP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_UDP:
-		if (rss_types & ETH_RSS_UDP) {
+		if (rss_types & RTE_ETH_RSS_UDP) {
 			*hash_field &= ~MLX5_UDP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_UDP;
 			else
 				*hash_field |= MLX5_UDP_IBV_RX_HASH;
@@ -14530,11 +14530,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_TCP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_TCP:
-		if (rss_types & ETH_RSS_TCP) {
+		if (rss_types & RTE_ETH_RSS_TCP) {
 			*hash_field &= ~MLX5_TCP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_TCP;
 			else
 				*hash_field |= MLX5_TCP_IBV_RX_HASH;
@@ -14682,8 +14682,8 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
 	origin = &shared_rss->origin;
 	origin->func = rss->func;
 	origin->level = rss->level;
-	/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-	origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+	/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+	origin->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	/* NULL RSS key indicates default RSS key. */
 	rss_key = !rss->key ? rss_hash_default_key : rss->key;
 	memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index b93fd4d2c962..ef286a13729c 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1834,7 +1834,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_TCP,
+					(rss_desc, tunnel, RTE_ETH_RSS_TCP,
 					 (IBV_RX_HASH_SRC_PORT_TCP |
 					  IBV_RX_HASH_DST_PORT_TCP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
@@ -1847,7 +1847,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_UDP,
+					(rss_desc, tunnel, RTE_ETH_RSS_UDP,
 					 (IBV_RX_HASH_SRC_PORT_UDP |
 					  IBV_RX_HASH_DST_PORT_UDP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b8..1ee014776643 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -68,7 +68,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
 		if (!(*priv->rxqs)[i])
 			continue;
 		(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
-			!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
+			!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
 		++idx;
 	}
 	return 0;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce798986..0d6c58f47d89 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -333,23 +333,23 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_dev_config *config = &priv->config;
-	uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
-			     DEV_RX_OFFLOAD_TIMESTAMP |
-			     DEV_RX_OFFLOAD_JUMBO_FRAME |
-			     DEV_RX_OFFLOAD_RSS_HASH);
+	uint64_t offloads = (RTE_ETH_RX_OFFLOAD_SCATTER |
+			     RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+			     RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+			     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 
 	if (!config->mprq.enabled)
 		offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
 	if (config->hw_fcs_strip)
-		offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	if (config->hw_csum)
-		offloads |= (DEV_RX_OFFLOAD_IPV4_CKSUM |
-			     DEV_RX_OFFLOAD_UDP_CKSUM |
-			     DEV_RX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
 	if (config->hw_vlan_strip)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	if (MLX5_LRO_SUPPORTED(dev))
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 	return offloads;
 }
 
@@ -363,7 +363,7 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 uint64_t
 mlx5_get_rx_port_offloads(void)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	return offloads;
 }
@@ -695,7 +695,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 				    dev->data->dev_conf.rxmode.offloads;
 
 		/* The offloads should be checked on rte_eth_dev layer. */
-		MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
+		MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 			DRV_LOG(ERR, "port %u queue index %u split "
 				     "offload not configured",
@@ -1329,7 +1329,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	struct mlx5_dev_config *config = &priv->config;
 	uint64_t offloads = conf->offloads |
 			   dev->data->dev_conf.rxmode.offloads;
-	unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
+	unsigned int lro_on_queue = !!(offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO);
 	unsigned int max_rx_pkt_len = lro_on_queue ?
 			dev->data->dev_conf.rxmode.max_lro_pkt_size :
 			dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -1431,7 +1431,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	} while (tail_len || !rte_is_power_of_2(tmpl->rxq.rxseg_n));
 	MLX5_ASSERT(tmpl->rxq.rxseg_n &&
 		    tmpl->rxq.rxseg_n <= MLX5_MAX_RXQ_NSEG);
-	if (tmpl->rxq.rxseg_n > 1 && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	if (tmpl->rxq.rxseg_n > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
 			" configured and no enough mbuf space(%u) to contain "
 			"the maximum RX packet length(%u) with head-room(%u)",
@@ -1475,7 +1475,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			config->mprq.stride_size_n : mprq_stride_size;
 		tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
 		tmpl->rxq.strd_scatter_en =
-				!!(offloads & DEV_RX_OFFLOAD_SCATTER);
+				!!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
 				config->mprq.max_memcpy_len);
 		max_lro_size = RTE_MIN(max_rx_pkt_len,
@@ -1490,7 +1490,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
 		tmpl->rxq.sges_n = 0;
 		max_lro_size = max_rx_pkt_len;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		unsigned int sges_n;
 
 		if (lro_on_queue && first_mb_free_size <
@@ -1551,9 +1551,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	mlx5_max_lro_msg_size_adjust(dev, idx, max_lro_size);
 	/* Toggle RX checksum offload if hardware supports it. */
-	tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
+	tmpl->rxq.csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* Configure Rx timestamp. */
-	tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
+	tmpl->rxq.hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
 	tmpl->rxq.timestamp_rx_flag = 0;
 	if (tmpl->rxq.hw_timestamp && rte_mbuf_dyn_rx_timestamp_register(
 			&tmpl->rxq.timestamp_offset,
@@ -1562,11 +1562,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		goto error;
 	}
 	/* Configure VLAN stripping. */
-	tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	tmpl->rxq.vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	/* By default, FCS (CRC) is stripped by hardware. */
 	tmpl->rxq.crc_present = 0;
 	tmpl->rxq.lro = lro_on_queue;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (config->hw_fcs_strip) {
 			/*
 			 * RQs used for LRO-enabled TIRs should not be
@@ -1596,7 +1596,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		tmpl->rxq.crc_present << 2);
 	/* Save port ID. */
 	tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
-		(!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
+		(!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = rx_seg[0].mp;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb3e..65d91bdf67e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -16,10 +16,10 @@
 
 /* HW checksum offload capabilities of vectorized Tx. */
 #define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
-	(DEV_TX_OFFLOAD_IPV4_CKSUM | \
-	 DEV_TX_OFFLOAD_UDP_CKSUM | \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 
 /*
  * Compile time sanity check for vectorized functions.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e46d..12aeba60348a 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -523,36 +523,36 @@ mlx5_select_tx_function(struct rte_eth_dev *dev)
 	unsigned int diff = 0, olx = 0, i, m;
 
 	MLX5_ASSERT(priv);
-	if (tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 		/* We should support Multi-Segment Packets. */
 		olx |= MLX5_TXOFF_CONFIG_MULTI;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			   DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			   DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			   DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			   RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO)) {
 		/* We should support TCP Send Offload. */
 		olx |= MLX5_TXOFF_CONFIG_TSO;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support Software Parser for Tunnels. */
 		olx |= MLX5_TXOFF_CONFIG_SWP;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support IP/TCP/UDP Checksums. */
 		olx |= MLX5_TXOFF_CONFIG_CSUM;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
 		/* We should support VLAN insertion. */
 		olx |= MLX5_TXOFF_CONFIG_VLAN;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
 	    rte_mbuf_dynflag_lookup
 			(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL) >= 0 &&
 	    rte_mbuf_dynfield_lookup
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index eb4d34ca559e..06cdeba662bc 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -98,35 +98,35 @@ uint64_t
 mlx5_get_tx_port_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_VLAN_INSERT);
+	uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	struct mlx5_dev_config *config = &priv->config;
 
 	if (config->hw_csum)
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if (config->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (config->tx_pp)
-		offloads |= DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP;
+		offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
 	if (config->swp) {
 		if (config->hw_csum)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->tso)
-			offloads |= (DEV_TX_OFFLOAD_IP_TNL_TSO |
-				     DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	}
 	if (config->tunnel_en) {
 		if (config->hw_csum)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->tso)
-			offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
 	}
 	if (!config->mprq.enabled)
-		offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	return offloads;
 }
 
@@ -801,17 +801,17 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	unsigned int inlen_mode; /* Minimal required Inline data. */
 	unsigned int txqs_inline; /* Min Tx queues to enable inline. */
 	uint64_t dev_txoff = priv->dev_data->dev_conf.txmode.offloads;
-	bool tso = txq_ctrl->txq.offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-					    DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					    DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					    DEV_TX_OFFLOAD_IP_TNL_TSO |
-					    DEV_TX_OFFLOAD_UDP_TNL_TSO);
+	bool tso = txq_ctrl->txq.offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					    RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	bool vlan_inline;
 	unsigned int temp;
 
 	txq_ctrl->txq.fast_free =
-		!!((txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
-		   !(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
+		!!((txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		   !(txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) &&
 		   !config->mprq.enabled);
 	if (config->txqs_inline == MLX5_ARG_UNSET)
 		txqs_inline =
@@ -870,7 +870,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	 * tx_burst routine.
 	 */
 	txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
-	vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
+	vlan_inline = (dev_txoff & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) &&
 		      !config->hw_vlan_insert;
 	/*
 	 * If there are few Tx queues it is prioritized
@@ -979,9 +979,9 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 		txq_ctrl->txq.tso_en = 1;
 	}
 	txq_ctrl->txq.tunnel_en = config->tunnel_en | config->swp;
-	txq_ctrl->txq.swp_en = ((DEV_TX_OFFLOAD_IP_TNL_TSO |
-				 DEV_TX_OFFLOAD_UDP_TNL_TSO |
-				 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &
+	txq_ctrl->txq.swp_en = ((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &
 				txq_ctrl->txq.offloads) && config->swp;
 }
 
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1f..07792fc5d94f 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -142,9 +142,9 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int i;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
-				       DEV_RX_OFFLOAD_VLAN_STRIP);
+				       RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (!priv->config.hw_vlan_strip) {
 			DRV_LOG(ERR, "port %u VLAN stripping is not supported",
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 7e1df1c75147..578816fe0513 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -464,8 +464,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	DRV_LOG(DEBUG, "VLAN stripping is %ssupported",
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index a3ee15020466..37803fe34538 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -114,7 +114,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 	struct mvneta_priv *priv = dev->data->dev_private;
 	struct neta_ppio_params *ppio_params;
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
 		MVNETA_LOG(INFO, "Unsupported RSS and rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		if (dev->data->nb_rx_queues > 1)
@@ -126,11 +126,11 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
 				 MRVL_NETA_ETH_HDRS_LEN;
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ppio_params = &priv->ppio_params;
@@ -155,10 +155,10 @@ static int
 mvneta_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 		   struct rte_eth_dev_info *info)
 {
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G;
 
 	info->max_rx_queues = MRVL_NETA_RXQ_MAX;
 	info->max_tx_queues = MRVL_NETA_TXQ_MAX;
@@ -510,28 +510,28 @@ mvneta_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 
 	neta_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..ccd47e8f4927 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,15 +54,15 @@
 #define MRVL_NETA_MRU_TO_MTU(mru)	((mru) - MRVL_NETA_HDRS_LEN)
 
 /** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
-			    DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+			    RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Tx offloads capabilities */
-#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				    DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MVNETA_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS)
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 				PKT_TX_TCP_CKSUM | \
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc09039..d28125ce9635 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -735,7 +735,7 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	rxq->priv = priv;
 	rxq->mp = mp;
 	rxq->cksum_enabled = dev->data->dev_conf.rxmode.offloads &
-			     DEV_RX_OFFLOAD_IPV4_CKSUM;
+			     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	rxq->size = desc;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 078aefbb8da4..539e196b807e 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -58,16 +58,16 @@
 #define MRVL_COOKIE_HIGH_ADDR_MASK 0xffffff0000000000
 
 /** Port Rx offload capabilities */
-#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
-			  DEV_RX_OFFLOAD_JUMBO_FRAME | \
-			  DEV_RX_OFFLOAD_CHECKSUM)
+#define MRVL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			  RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+			  RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Port Tx offloads capabilities */
-#define MRVL_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				  DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				  DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MRVL_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
-			  DEV_TX_OFFLOAD_MULTI_SEGS)
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 			      PKT_TX_TCP_CKSUM | \
@@ -443,14 +443,14 @@ mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf)
 
 	if (rss_conf->rss_hf == 0) {
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
-	} else if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_2_TUPLE;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 1;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 0;
@@ -484,8 +484,8 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE &&
-	    dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(INFO, "Unsupported rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		return -EINVAL;
@@ -496,7 +496,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
 				 MRVL_PP2_ETH_HDRS_LEN;
 		if (dev->data->mtu > priv->max_mtu) {
@@ -508,7 +508,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ret = mrvl_configure_rxqs(priv, dev->data->port_id,
@@ -530,7 +530,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return ret;
 
 	if (dev->data->nb_rx_queues == 1 &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(WARNING, "Disabling hash for 1 rx queue");
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
 		priv->configured = 1;
@@ -632,7 +632,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		return 0;
 	}
 
@@ -653,7 +653,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -673,14 +673,14 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 	ret = pp2_ppio_disable(priv->ppio);
 	if (ret)
 		return ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -902,7 +902,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 	if (dev->data->all_multicast == 1)
 		mrvl_allmulticast_enable(dev);
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = mrvl_populate_vlan_table(dev, 1);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to populate VLAN table");
@@ -938,11 +938,11 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 		priv->flow_ctrl = 0;
 	}
 
-	if (dev->data->dev_link.link_status == ETH_LINK_UP) {
+	if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 		ret = mrvl_dev_set_link_up(dev);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to set link up");
-			dev->data->dev_link.link_status = ETH_LINK_DOWN;
+			dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 			goto out;
 		}
 	}
@@ -1211,30 +1211,30 @@ mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case SPEED_10000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 	pp2_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -1718,11 +1718,11 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G |
-			   ETH_LINK_SPEED_10G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G |
+			   RTE_ETH_LINK_SPEED_10G;
 
 	info->max_rx_queues = MRVL_PP2_RXQ_MAX;
 	info->max_tx_queues = MRVL_PP2_TXQ_MAX;
@@ -1742,9 +1742,9 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 	info->tx_offload_capa = MRVL_TX_OFFLOADS;
 	info->tx_queue_offload_capa = MRVL_TX_OFFLOADS;
 
-	info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-				       ETH_RSS_NONFRAG_IPV4_TCP |
-				       ETH_RSS_NONFRAG_IPV4_UDP;
+	info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+				       RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				       RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	/* By default packets are dropped if no descriptors are available */
 	info->default_rxconf.rx_drop_en = 1;
@@ -1873,13 +1873,13 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 	int ret;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		MRVL_LOG(ERR, "VLAN stripping is not supported\n");
 		return -ENOTSUP;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = mrvl_populate_vlan_table(dev, 1);
 		else
 			ret = mrvl_populate_vlan_table(dev, 0);
@@ -1888,7 +1888,7 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			return ret;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
 		MRVL_LOG(ERR, "Extend VLAN not supported\n");
 		return -ENOTSUP;
 	}
@@ -2033,7 +2033,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 
 	rxq->priv = priv;
 	rxq->mp = mp;
-	rxq->cksum_enabled = offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+	rxq->cksum_enabled = offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
@@ -2189,7 +2189,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+	fc_conf->mode = en ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
 
 	ret = pp2_ppio_get_tx_pause(priv->ppio, &en);
 	if (ret) {
@@ -2198,10 +2198,10 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	if (en) {
-		if (fc_conf->mode == RTE_FC_NONE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+		if (fc_conf->mode == RTE_ETH_FC_NONE)
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 	}
 
 	return 0;
@@ -2247,19 +2247,19 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		rx_en = 1;
 		tx_en = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		rx_en = 0;
 		tx_en = 1;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		rx_en = 1;
 		tx_en = 0;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		rx_en = 0;
 		tx_en = 0;
 		break;
@@ -2336,11 +2336,11 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hash_type == PP2_PPIO_HASH_T_NONE)
 		rss_conf->rss_hf = 0;
 	else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
-		rss_conf->rss_hf = ETH_RSS_IPV4;
+		rss_conf->rss_hf = RTE_ETH_RSS_IPV4;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	return 0;
 }
@@ -3159,7 +3159,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name)
 	eth_dev->dev_ops = &mrvl_ops;
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_eth_dev_probing_finish(eth_dev);
 	return 0;
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a40597349..15645f1e5d2a 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -40,16 +40,16 @@
 #include "hn_nvs.h"
 #include "ndis.h"
 
-#define HN_TX_OFFLOAD_CAPS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			    DEV_TX_OFFLOAD_TCP_CKSUM  | \
-			    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-			    DEV_TX_OFFLOAD_TCP_TSO    | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS | \
-			    DEV_TX_OFFLOAD_VLAN_INSERT)
+#define HN_TX_OFFLOAD_CAPS (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			    RTE_ETH_TX_OFFLOAD_TCP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_TCP_TSO    | \
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+			    RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 
-#define HN_RX_OFFLOAD_CAPS (DEV_RX_OFFLOAD_CHECKSUM | \
-			    DEV_RX_OFFLOAD_VLAN_STRIP | \
-			    DEV_RX_OFFLOAD_RSS_HASH)
+#define HN_RX_OFFLOAD_CAPS (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+			    RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NETVSC_ARG_LATENCY "latency"
 #define NETVSC_ARG_RXBREAK "rx_copybreak"
@@ -238,21 +238,21 @@ hn_dev_link_update(struct rte_eth_dev *dev,
 	hn_rndis_get_linkspeed(hv);
 
 	link = (struct rte_eth_link) {
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_autoneg = ETH_LINK_SPEED_FIXED,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_autoneg = RTE_ETH_LINK_SPEED_FIXED,
 		.link_speed = hv->link_speed / 10000,
 	};
 
 	if (hv->link_status == NDIS_MEDIA_STATE_CONNECTED)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (old.link_status == link.link_status)
 		return 0;
 
 	PMD_INIT_LOG(DEBUG, "Port %d is %s", dev->data->port_id,
-		     (link.link_status == ETH_LINK_UP) ? "up" : "down");
+		     (link.link_status == RTE_ETH_LINK_UP) ? "up" : "down");
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -263,14 +263,14 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
 	struct hn_data *hv = dev->data->dev_private;
 	int rc;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->min_rx_bufsize = HN_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = HN_MAX_XFER_LEN;
 	dev_info->max_mac_addrs  = 1;
 
 	dev_info->hash_key_size = NDIS_HASH_KEYSIZE_TOEPLITZ;
 	dev_info->flow_type_rss_offloads = hv->rss_offloads;
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 
 	dev_info->max_rx_queues = hv->max_queues;
 	dev_info->max_tx_queues = hv->max_queues;
@@ -362,17 +362,17 @@ static void hn_rss_hash_init(struct hn_data *hv,
 	/* Convert from DPDK RSS hash flags to NDIS hash flags */
 	hv->rss_hash = NDIS_HASH_FUNCTION_TOEPLITZ;
 
-	if (rss_conf->rss_hf & ETH_RSS_IPV4)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 		hv->rss_hash |= NDIS_HASH_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 		hv->rss_hash |=  NDIS_HASH_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX)
 		hv->rss_hash |=  NDIS_HASH_IPV6_EX;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6_EX;
 
 	memcpy(hv->rss_key, rss_conf->rss_key ? : rss_default_key,
@@ -427,22 +427,22 @@ static int hn_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	if (hv->rss_hash & NDIS_HASH_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_IPV4;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_IPV6;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_EX;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	return 0;
 }
@@ -686,8 +686,8 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
 	if (unsupported) {
@@ -705,7 +705,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	hv->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	hv->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	err = hn_rndis_conf_offload(hv, txmode->offloads,
 				    rxmode->offloads);
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index e3f7e636d731..cacb30385404 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -710,15 +710,15 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
 
 	hv->rss_offloads = 0;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV4)
-		hv->rss_offloads |= ETH_RSS_IPV4
-			| ETH_RSS_NONFRAG_IPV4_TCP
-			| ETH_RSS_NONFRAG_IPV4_UDP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV4
+			| RTE_ETH_RSS_NONFRAG_IPV4_TCP
+			| RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
-		hv->rss_offloads |= ETH_RSS_IPV6
-			| ETH_RSS_NONFRAG_IPV6_TCP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6
+			| RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
-		hv->rss_offloads |= ETH_RSS_IPV6_EX
-			| ETH_RSS_IPV6_TCP_EX;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
+			| RTE_ETH_RSS_IPV6_TCP_EX;
 
 	/* Commit! */
 	*rxr_cnt0 = rxr_cnt;
@@ -800,7 +800,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 		params.ndis_hdr.ndis_size = NDIS_OFFLOAD_PARAMS_SIZE;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -812,7 +812,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4)
 		    == NDIS_RXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum |= NDIS_OFFLOAD_PARAM_RX;
@@ -826,7 +826,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4)
 			params.ndis_udp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -839,7 +839,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (rx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4)
 			params.ndis_udp4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
@@ -851,21 +851,21 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_IP4)
 		    == NDIS_TXCSUM_CAP_IP4)
 			params.ndis_ip4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
 			goto unsupported;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
 			params.ndis_ip4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023)
 			params.ndis_lsov2_ip4 = NDIS_OFFLOAD_LSOV2_ON;
 		else
@@ -907,41 +907,41 @@ int hn_rndis_get_offload(struct hn_data *hv,
 		return error;
 	}
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_IP4)
 	    == HN_NDIS_TXCSUM_CAP_IP4)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_TCP4)
 	    == HN_NDIS_TXCSUM_CAP_TCP4 &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & HN_NDIS_TXCSUM_CAP_TCP6)
 	    == HN_NDIS_TXCSUM_CAP_TCP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & NDIS_TXCSUM_CAP_UDP6))
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 
 	if ((hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023) &&
 	    (hwcaps.ndis_lsov2.ndis_ip6_opts & HN_NDIS_LSOV2_CAP_IP6)
 	    == HN_NDIS_LSOV2_CAP_IP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				    DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_UDP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 
 	return 0;
 }
diff --git a/drivers/net/nfb/nfb_ethdev.c b/drivers/net/nfb/nfb_ethdev.c
index 7e91d5984740..c2ff1c999869 100644
--- a/drivers/net/nfb/nfb_ethdev.c
+++ b/drivers/net/nfb/nfb_ethdev.c
@@ -200,7 +200,7 @@ nfb_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = dev->data->nb_rx_queues;
 	dev_info->max_tx_queues = dev->data->nb_tx_queues;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -268,26 +268,26 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 
 	status.speed = MAC_SPEED_UNKNOWN;
 
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_status = ETH_LINK_DOWN;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_SPEED_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_SPEED_FIXED;
 
 	if (internals->rxmac[0] != NULL) {
 		nc_rxmac_read_status(internals->rxmac[0], &status);
 
 		switch (status.speed) {
 		case MAC_SPEED_10G:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case MAC_SPEED_40G:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case MAC_SPEED_100G:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
@@ -296,7 +296,7 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 		nc_rxmac_read_status(internals->rxmac[i], &status);
 
 		if (status.enabled && status.link_up) {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			break;
 		}
 	}
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index d6d4ba9663c6..f19e9834848b 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -42,7 +42,7 @@ nfb_check_timestamp(struct rte_devargs *devargs)
 	}
 	/* Timestamps are enabled when there is
 	 * key-value pair: enable_timestamp=1
-	 * TODO: timestamp should be enabled with DEV_RX_OFFLOAD_TIMESTAMP
+	 * TODO: timestamp should be enabled with RTE_ETH_RX_OFFLOAD_TIMESTAMP
 	 */
 	if (rte_kvargs_process(kvlist, TIMESTAMP_ARG,
 		timestamp_check_handler, NULL) < 0) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 1b4bc33593fb..c526c949a64c 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -160,8 +160,8 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Checking TX mode */
 	if (txmode->mq_mode) {
@@ -170,7 +170,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Checking RX mode */
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS &&
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS &&
 	    !(hw->cap & NFP_NET_CFG_CTRL_RSS)) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
@@ -359,20 +359,20 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
 			ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		hw->mtu = rxmode->max_rx_pkt_len;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
 
 	/* L2 broadcast */
@@ -384,13 +384,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hw->cap & NFP_NET_CFG_CTRL_LSO)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -398,7 +398,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	}
 
 	/* RX gather */
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -486,14 +486,14 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	int ret;
 
 	static const uint32_t ls_to_ethtool[] = {
-		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_1G]          = ETH_SPEED_NUM_1G,
-		[NFP_NET_CFG_STS_LINK_RATE_10G]         = ETH_SPEED_NUM_10G,
-		[NFP_NET_CFG_STS_LINK_RATE_25G]         = ETH_SPEED_NUM_25G,
-		[NFP_NET_CFG_STS_LINK_RATE_40G]         = ETH_SPEED_NUM_40G,
-		[NFP_NET_CFG_STS_LINK_RATE_50G]         = ETH_SPEED_NUM_50G,
-		[NFP_NET_CFG_STS_LINK_RATE_100G]        = ETH_SPEED_NUM_100G,
+		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_1G]          = RTE_ETH_SPEED_NUM_1G,
+		[NFP_NET_CFG_STS_LINK_RATE_10G]         = RTE_ETH_SPEED_NUM_10G,
+		[NFP_NET_CFG_STS_LINK_RATE_25G]         = RTE_ETH_SPEED_NUM_25G,
+		[NFP_NET_CFG_STS_LINK_RATE_40G]         = RTE_ETH_SPEED_NUM_40G,
+		[NFP_NET_CFG_STS_LINK_RATE_50G]         = RTE_ETH_SPEED_NUM_50G,
+		[NFP_NET_CFG_STS_LINK_RATE_100G]        = RTE_ETH_SPEED_NUM_100G,
 	};
 
 	PMD_DRV_LOG(DEBUG, "Link update");
@@ -505,15 +505,15 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
 	if (nn_link_status & NFP_NET_CFG_STS_LINK)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	nn_link_status = (nn_link_status >> NFP_NET_CFG_STS_LINK_RATE_SHIFT) &
 			 NFP_NET_CFG_STS_LINK_RATE_MASK;
 
 	if (nn_link_status >= RTE_DIM(ls_to_ethtool))
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		link.link_speed = ls_to_ethtool[nn_link_status];
 
@@ -702,26 +702,26 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = 1;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_UDP_CKSUM |
-					     DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM |
-					     DEV_TX_OFFLOAD_UDP_CKSUM |
-					     DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -758,25 +758,25 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 
 	/* All NFP devices support jumbo frames */
-	dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
-		dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-						   ETH_RSS_NONFRAG_IPV4_TCP |
-						   ETH_RSS_NONFRAG_IPV4_UDP |
-						   ETH_RSS_IPV6 |
-						   ETH_RSS_NONFRAG_IPV6_TCP |
-						   ETH_RSS_NONFRAG_IPV6_UDP;
+		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+						   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+						   RTE_ETH_RSS_IPV6 |
+						   RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 		dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
 		dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
 	}
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			       ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			       RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -847,7 +847,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	if (link.link_status)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 			    dev->data->port_id, link.link_speed,
-			    link.link_duplex == ETH_LINK_FULL_DUPLEX
+			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 			    ? "full-duplex" : "half-duplex");
 	else
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -964,9 +964,9 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	/* switch to jumbo mode if needed */
 	if ((uint32_t)mtu > RTE_ETHER_MTU)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->data->dev_conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* update max frame size */
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
@@ -990,12 +990,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	new_ctrl = 0;
 
 	/* Enable vlan strip if it is not configured yet */
-	if ((mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if ((mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    !(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_RXVLAN;
 
 	/* Disable vlan strip just if it is configured */
-	if (!(mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if (!(mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_RXVLAN;
 
@@ -1155,22 +1155,22 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
 
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1240,22 +1240,22 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	/* Propagate current RSS hash functions to caller */
 	rss_conf->rss_hf = rss_hf;
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 534a38c14f94..7a6a963bf6cc 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -140,7 +140,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index b697b55865cc..ac960328c7de 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -101,7 +101,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615adfa..fc76b84b5b66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -409,7 +409,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	dev->data->dev_link.link_status = link_up;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 		negotiate = true;
 
 	err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
@@ -418,11 +418,11 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 
 	allowed_speeds = 0;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_1G;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_1G;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_100M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_100M;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_10M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_10M;
 
 	if (*link_speeds & ~allowed_speeds) {
 		PMD_INIT_LOG(ERR, "Invalid link setting");
@@ -430,14 +430,14 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = hw->mac.default_speeds;
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= NGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= NGBE_LINK_SPEED_100M_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= NGBE_LINK_SPEED_10M_FULL;
 	}
 
@@ -653,8 +653,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_10M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_10M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -682,11 +682,11 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			~ETH_LINK_SPEED_AUTONEG);
+			~RTE_ETH_LINK_SPEED_AUTONEG);
 
 	hw->mac.get_link_status = true;
 
@@ -699,8 +699,8 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -708,27 +708,27 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 
 	intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case NGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 
 	case NGBE_LINK_SPEED_10M_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		lan_speed = 0;
 		break;
 
 	case NGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		lan_speed = 1;
 		break;
 
 	case NGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		lan_speed = 2;
 		break;
 	}
@@ -912,11 +912,11 @@ ngbe_dev_link_status_print(struct rte_eth_dev *dev)
 
 	rte_eth_linkstatus_get(dev, &link);
 
-	if (link.link_status == ETH_LINK_UP) {
+	if (link.link_status == RTE_ETH_LINK_UP) {
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -956,7 +956,7 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
 		ngbe_dev_link_update(dev, 0);
 
 		/* likely to up */
-		if (link.link_status != ETH_LINK_UP)
+		if (link.link_status != RTE_ETH_LINK_UP)
 			/* handle it 1 sec later, wait it being stable */
 			timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
 		/* likely to down */
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 508bafc12a14..789c6b9c4b9a 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -61,16 +61,16 @@ struct pmd_internals {
 	rte_spinlock_t rss_lock;
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_128 /
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
 			RTE_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[40];                /**< 40-byte hash key. */
 };
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_null_logtype, NOTICE);
@@ -189,7 +189,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return -EINVAL;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -199,7 +199,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return 0;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -538,7 +538,7 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
 	internals->port_id = eth_dev->data->port_id;
 	rte_eth_random_addr(internals->eth_addr.addr_bytes);
 
-	internals->flow_type_rss_offloads =  ETH_RSS_PROTO_MASK;
+	internals->flow_type_rss_offloads =  RTE_ETH_RSS_PROTO_MASK;
 	internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_RETA_GROUP_SIZE;
 
 	rte_memcpy(internals->rss_key, default_rss_key, 40);
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9f4c0503b4d4..947dabdca2c5 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -158,7 +158,7 @@ octeontx_link_status_print(struct rte_eth_dev *eth_dev,
 		octeontx_log_info("Port %u: Link Up - speed %u Mbps - %s",
 			  (eth_dev->data->port_id),
 			  link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		octeontx_log_info("Port %d: Link Down",
@@ -171,38 +171,38 @@ octeontx_link_status_update(struct octeontx_nic *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	switch (nic->speed) {
 	case OCTEONTX_LINK_SPEED_SGMII:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_XAUI:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RXAUI:
 	case OCTEONTX_LINK_SPEED_10G_R:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case OCTEONTX_LINK_SPEED_QSGMII:
-		link->link_speed = ETH_SPEED_NUM_5G;
+		link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case OCTEONTX_LINK_SPEED_40G_R:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RESERVE1:
 	case OCTEONTX_LINK_SPEED_RESERVE2:
 	default:
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		octeontx_log_err("incorrect link speed %d", nic->speed);
 		break;
 	}
 
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -355,20 +355,20 @@ octeontx_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(nic->tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= OCCTX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= OCCTX_TX_MULTI_SEG_F;
 
 	return flags;
@@ -380,21 +380,21 @@ octeontx_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		flags |= OCCTX_RX_MULTI_SEG_F;
 		eth_dev->data->scattered_rx = 1;
 		/* If scatter mode is enabled, TX should also be in multi
 		 * seg mode, else memory leak will occur
 		 */
-		nic->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		nic->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	return flags;
@@ -423,18 +423,18 @@ octeontx_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		octeontx_log_err("unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (!(txmode->offloads & DEV_TX_OFFLOAD_MT_LOCKFREE)) {
+	if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)) {
 		PMD_INIT_LOG(NOTICE, "cant disable lockfree tx");
-		txmode->offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
+		txmode->offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		octeontx_log_err("setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -534,13 +534,13 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		octeontx_log_err("Scatter mode is disabled");
 		return -EINVAL;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * OCCTX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -553,9 +553,9 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 		return rc;
 
 	if (frame_size > OCCTX_L2_MAX_LEN)
-		nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* Update max_rx_pkt_len */
 	data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -582,7 +582,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
 
 	/* Setup scatter mode if needed by jumbo */
 	if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
-		nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+		nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
 		nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
 	}
@@ -854,10 +854,10 @@ octeontx_dev_info(struct rte_eth_dev *dev,
 	struct octeontx_nic *nic = octeontx_pmd_priv(dev);
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_40G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_40G;
 
 	/* Min/Max MTU supported */
 	dev_info->min_rx_bufsize = OCCTX_MIN_FRS;
@@ -1369,7 +1369,7 @@ octeontx_create(struct rte_vdev_device *dev, int port, uint8_t evdev,
 	nic->ev_ports = 1;
 	nic->print_flag = -1;
 
-	data->dev_link.link_status = ETH_LINK_DOWN;
+	data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	data->dev_started = 0;
 	data->promiscuous = 0;
 	data->all_multicast = 0;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..7215039507c3 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -55,24 +55,24 @@
 #define OCCTX_MAX_MTU		(OCCTX_MAX_FRS - OCCTX_L2_OVERHEAD)
 
 #define OCTEONTX_RX_OFFLOADS		(				   \
-					 DEV_RX_OFFLOAD_CHECKSUM	 | \
-					 DEV_RX_OFFLOAD_SCTP_CKSUM       | \
-					 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_RX_OFFLOAD_SCATTER	         | \
-					 DEV_RX_OFFLOAD_SCATTER		 | \
-					 DEV_RX_OFFLOAD_JUMBO_FRAME	 | \
-					 DEV_RX_OFFLOAD_VLAN_FILTER)
+					 RTE_ETH_RX_OFFLOAD_CHECKSUM	 | \
+					 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM       | \
+					 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_RX_OFFLOAD_SCATTER	         | \
+					 RTE_ETH_RX_OFFLOAD_SCATTER		 | \
+					 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME	 | \
+					 RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 
 #define OCTEONTX_TX_OFFLOADS		(				   \
-					 DEV_TX_OFFLOAD_MBUF_FAST_FREE	 | \
-					 DEV_TX_OFFLOAD_MT_LOCKFREE	 | \
-					 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_IPV4_CKSUM	 | \
-					 DEV_TX_OFFLOAD_TCP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_SCTP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_MULTI_SEGS)
+					 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	 | \
+					 RTE_ETH_TX_OFFLOAD_MT_LOCKFREE	 | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_TCP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_UDP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 static inline struct octeontx_nic *
 octeontx_pmd_priv(struct rte_eth_dev *dev)
diff --git a/drivers/net/octeontx/octeontx_ethdev_ops.c b/drivers/net/octeontx/octeontx_ethdev_ops.c
index dbe13ce3826b..6ec2b71b0672 100644
--- a/drivers/net/octeontx/octeontx_ethdev_ops.c
+++ b/drivers/net/octeontx/octeontx_ethdev_ops.c
@@ -43,20 +43,20 @@ octeontx_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			rc = octeontx_vlan_hw_filter(nic, true);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags |= OCCTX_RX_VLAN_FLTR_F;
 		} else {
 			rc = octeontx_vlan_hw_filter(nic, false);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags &= ~OCCTX_RX_VLAN_FLTR_F;
 		}
 	}
@@ -139,7 +139,7 @@ octeontx_dev_vlan_offload_init(struct rte_eth_dev *dev)
 
 	TAILQ_INIT(&nic->vlan_info.fltr_tbl);
 
-	rc = octeontx_dev_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	rc = octeontx_dev_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 	if (rc)
 		octeontx_log_err("Failed to set vlan offload rc=%d", rc);
 
@@ -219,13 +219,13 @@ octeontx_dev_flow_ctrl_get(struct rte_eth_dev *dev,
 		return rc;
 
 	if (conf.rx_pause && conf.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (conf.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (conf.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	/* low_water & high_water values are in Bytes */
 	fc_conf->low_water = conf.low_water;
@@ -272,10 +272,10 @@ octeontx_dev_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	conf.high_water = fc_conf->high_water;
 	conf.low_water = fc_conf->low_water;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e7c..ebe503438144 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -21,7 +21,7 @@ nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
 
 	if (otx2_dev_is_vf(dev) ||
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -33,10 +33,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
 
 	/* TSO not supported for earlier chip revisions */
 	if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
-		capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-			  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	return capa;
 }
 
@@ -66,8 +66,8 @@ nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
 	req->npa_func = otx2_npa_pf_func_get();
 	req->sso_func = otx2_sso_pf_func_get();
 	req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
 		req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
 	}
@@ -373,7 +373,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
 
 	aq->rq.sso_ena = 0;
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		aq->rq.ipsech_ena = 1;
 
 	aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
@@ -664,7 +664,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev)) {
 		rc = otx2_nix_raw_clock_tsc_conv(dev);
 		if (rc) {
@@ -691,7 +691,7 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
 	 * Maximum three segments can be supported with W8, Choose
 	 * NIX_MAXSQESZ_W16 for multi segment offload.
 	 */
-	if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -706,29 +706,29 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-			(dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+			(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_QINQ_STRIP))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
 		flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	if (!dev->ptype_disable)
@@ -767,43 +767,43 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		    DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if (conf & DEV_TX_OFFLOAD_SECURITY)
+	if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
 	return flags;
@@ -913,8 +913,8 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 		/* Setting up the rx[tx]_offload_flags due to change
 		 * in rx[tx]_offloads.
@@ -1857,21 +1857,21 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
 
 	if (otx2_dev_is_Ax(dev) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	    (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		otx2_err("Outer IP and SCTP checksum unsupported");
 		goto fail_configure;
 	}
@@ -2244,7 +2244,7 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled in PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev))
 		otx2_nix_timesync_enable(eth_dev);
 	else
@@ -2573,8 +2573,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	rc = otx2_eth_sec_ctx_create(eth_dev);
 	if (rc)
 		goto free_mac_addrs;
-	dev->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
-	dev->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+	dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
+	dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
 
 	/* Initialize rte-flow */
 	rc = otx2_flow_init(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30bda..04e43b63c192 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -117,44 +117,44 @@
 #define CQ_TIMER_THRESH_DEFAULT	0xAULL /* ~1usec i.e (0xA * 100nsec) */
 #define CQ_TIMER_THRESH_MAX     255
 
-#define NIX_RSS_L3_L4_SRC_DST  (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY \
-				| ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define NIX_RSS_L3_L4_SRC_DST  (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
+				| RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
-#define NIX_RSS_OFFLOAD		(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
-				 ETH_RSS_TCP | ETH_RSS_SCTP | \
-				 ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
-				 NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | \
-				 ETH_RSS_C_VLAN)
+#define NIX_RSS_OFFLOAD		(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
+				 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
+				 RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
+				 NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
+				 RTE_ETH_RSS_C_VLAN)
 
 #define NIX_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE	| \
-	DEV_TX_OFFLOAD_MT_LOCKFREE	| \
-	DEV_TX_OFFLOAD_VLAN_INSERT	| \
-	DEV_TX_OFFLOAD_QINQ_INSERT	| \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
-	DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_CKSUM	| \
-	DEV_TX_OFFLOAD_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_TSO		| \
-	DEV_TX_OFFLOAD_VXLAN_TNL_TSO    | \
-	DEV_TX_OFFLOAD_GENEVE_TNL_TSO   | \
-	DEV_TX_OFFLOAD_GRE_TNL_TSO	| \
-	DEV_TX_OFFLOAD_MULTI_SEGS	| \
-	DEV_TX_OFFLOAD_IPV4_CKSUM)
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	| \
+	RTE_ETH_TX_OFFLOAD_MT_LOCKFREE	| \
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT	| \
+	RTE_ETH_TX_OFFLOAD_QINQ_INSERT	| \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO		| \
+	RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    | \
+	RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   | \
+	RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO	| \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS	| \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 
 #define NIX_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM		| \
-	DEV_RX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_RX_OFFLOAD_SCATTER		| \
-	DEV_RX_OFFLOAD_JUMBO_FRAME	| \
-	DEV_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_RX_OFFLOAD_VLAN_STRIP	| \
-	DEV_RX_OFFLOAD_VLAN_FILTER	| \
-	DEV_RX_OFFLOAD_QINQ_STRIP	| \
-	DEV_RX_OFFLOAD_TIMESTAMP	| \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM		| \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM	| \
+	RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_RX_OFFLOAD_SCATTER		| \
+	RTE_ETH_RX_OFFLOAD_JUMBO_FRAME	| \
+	RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP	| \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER	| \
+	RTE_ETH_RX_OFFLOAD_QINQ_STRIP	| \
+	RTE_ETH_RX_OFFLOAD_TIMESTAMP	| \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NIX_DEFAULT_RSS_CTX_GROUP  0
 #define NIX_DEFAULT_RSS_MCAM_IDX  -1
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83f905315b38..60bf6c3f5f05 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -49,12 +49,12 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
-		val = ETH_RSS_RETA_SIZE_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
-		val = ETH_RSS_RETA_SIZE_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
-		val = ETH_RSS_RETA_SIZE_256;
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
+		val = RTE_ETH_RSS_RETA_SIZE_64;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
+		val = RTE_ETH_RSS_RETA_SIZE_128;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
+		val = RTE_ETH_RSS_RETA_SIZE_256;
 	else
 		val = NIX_RSS_RETA_SIZE;
 
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 5a4501208e9e..41761085e156 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -29,11 +29,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
 		return -EINVAL;
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -59,9 +59,9 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 		return rc;
 
 	if (frame_size > NIX_L2_MAX_LEN)
-		dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* Update max_rx_pkt_len */
 	data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -590,17 +590,17 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 	};
 
 	/* Auto negotiation disabled */
-	devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
-		devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+		devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
 
 		/* 50G and 100G to be supported for board version C0
 		 * and above.
 		 */
 		if (!otx2_dev_is_Ax(dev))
-			devinfo->speed_capa |= ETH_LINK_SPEED_50G |
-					       ETH_LINK_SPEED_100G;
+			devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
+					       RTE_ETH_LINK_SPEED_100G;
 	}
 
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index c2a36883cbf2..e1654ef5b284 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -890,8 +890,8 @@ otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
 			 !RTE_IS_POWER_OF_2(sa_width));
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return 0;
 
 	if (rte_security_dynfield_register() < 0)
@@ -933,8 +933,8 @@ otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
 	uint16_t port = eth_dev->data->port_id;
 	char name[RTE_MEMZONE_NAMESIZE];
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	lookup_mem_sa_tbl_clear(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 6df0732189eb..1d0fe4e950d4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,7 +625,7 @@ otx2_flow_create(struct rte_eth_dev *dev,
 		goto err_exit;
 	}
 
-	if (hw->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		rc = flow_update_sec_tt(dev, actions);
 		if (rc != 0) {
 			rte_flow_error_set(error, EIO,
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
index 76bf48100183..071740de86a7 100644
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -54,7 +54,7 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 	int rc;
 
 	if (otx2_dev_is_lbk(dev)) {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		return 0;
 	}
 
@@ -66,13 +66,13 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		goto done;
 
 	if (rsp->rx_pause && rsp->tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rsp->rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (rsp->tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 done:
 	return rc;
@@ -159,10 +159,10 @@ otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -212,11 +212,11 @@ otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (otx2_dev_is_Ax(dev) &&
 	    (dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
-	    (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_conf.mode =
-				(fc_conf.mode == RTE_FC_FULL ||
-				fc_conf.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_conf.mode == RTE_ETH_FC_FULL ||
+				fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
@@ -234,7 +234,7 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		return 0;
 
 	memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -242,10 +242,10 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 63a33142a579..3fe6727f1d2a 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -852,7 +852,7 @@ parse_rss_action(struct rte_eth_dev *dev,
 					  attr, "No support of RSS in egress");
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
 					  act, "multi-queue mode is disabled");
@@ -1188,7 +1188,7 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
 		 *FLOW_KEY_ALG index. So, till we update the action with
 		 *flow_key_alg index, set the action to drop.
 		 */
-		if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 			flow->npc_action = NIX_RX_ACTIONOP_DROP;
 		else
 			flow->npc_action = NIX_RX_ACTIONOP_UCAST;
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 81dd6243b977..8f5d0eed92b6 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -41,7 +41,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		otx2_info("Port %d: Link Up - speed %u Mbps - %s",
 			  (int)(eth_dev->data->port_id),
 			  (uint32_t)link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
@@ -92,7 +92,7 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 
 	eth_link.link_status = link->link_up;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	otx2_dev->speed = link->speed;
@@ -111,10 +111,10 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 static int
 lbk_link_update(struct rte_eth_link *link)
 {
-	link->link_status = ETH_LINK_UP;
-	link->link_speed = ETH_SPEED_NUM_100G;
-	link->link_autoneg = ETH_LINK_FIXED;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = RTE_ETH_LINK_UP;
+	link->link_speed = RTE_ETH_SPEED_NUM_100G;
+	link->link_autoneg = RTE_ETH_LINK_FIXED;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	return 0;
 }
 
@@ -131,7 +131,7 @@ cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
 
 	link->link_status = rsp->link_info.link_up;
 	link->link_speed = rsp->link_info.speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (rsp->link_info.full_duplex)
 		link->link_duplex = rsp->link_info.full_duplex;
@@ -233,22 +233,22 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 
 	/* 50G and 100G to be supported for board version C0 and above */
 	if (!otx2_dev_is_Ax(dev)) {
-		if (link_speeds & ETH_LINK_SPEED_100G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 			link_speed = 100000;
-		if (link_speeds & ETH_LINK_SPEED_50G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 			link_speed = 50000;
 	}
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed = 40000;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed = 25000;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed = 20000;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed = 10000;
-	if (link_speeds & ETH_LINK_SPEED_5G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 		link_speed = 5000;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed = 1000;
 
 	return link_speed;
@@ -257,11 +257,11 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 static inline uint8_t
 nix_parse_eth_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-			(link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+			(link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 int
@@ -279,7 +279,7 @@ otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
 	cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
 	if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
 		cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
-		cfg.an = (conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+		cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		return cgx_change_mode(dev, &cfg);
 	}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
index f84aa1bf570c..b9c63ad3bc21 100644
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ b/drivers/net/octeontx2/otx2_mcast.c
@@ -100,7 +100,7 @@ nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
 
 		action = NIX_RX_ACTIONOP_UCAST;
 
-		if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+		if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 			action = NIX_RX_ACTIONOP_RSS;
 			action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 		}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 91e5c0f6bd11..abb213058792 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -250,7 +250,7 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	/* System time should be already on by default */
 	nix_start_timecounters(eth_dev);
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
@@ -287,7 +287,7 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 	if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
 		return -EINVAL;
 
-	dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
 
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7dbe5f69ae65..cbc6d67a7fcf 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -178,23 +178,23 @@ rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
 }
 
 #define RSS_IPV4_ENABLE ( \
-			  ETH_RSS_IPV4 | \
-			  ETH_RSS_FRAG_IPV4 | \
-			  ETH_RSS_NONFRAG_IPV4_UDP | \
-			  ETH_RSS_NONFRAG_IPV4_TCP | \
-			  ETH_RSS_NONFRAG_IPV4_SCTP)
+			  RTE_ETH_RSS_IPV4 | \
+			  RTE_ETH_RSS_FRAG_IPV4 | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE ( \
-			  ETH_RSS_IPV6 | \
-			  ETH_RSS_FRAG_IPV6 | \
-			  ETH_RSS_NONFRAG_IPV6_UDP | \
-			  ETH_RSS_NONFRAG_IPV6_TCP | \
-			  ETH_RSS_NONFRAG_IPV6_SCTP)
+			  RTE_ETH_RSS_IPV6 | \
+			  RTE_ETH_RSS_FRAG_IPV6 | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE ( \
-			     ETH_RSS_IPV6_EX | \
-			     ETH_RSS_IPV6_TCP_EX | \
-			     ETH_RSS_IPV6_UDP_EX)
+			     RTE_ETH_RSS_IPV6_EX | \
+			     RTE_ETH_RSS_IPV6_TCP_EX | \
+			     RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS   3
 
@@ -233,24 +233,24 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->rss_info.nix_rss = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -259,34 +259,34 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -343,7 +343,7 @@ otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 		otx2_nix_rss_set_key(dev, rss_conf->rss_key,
 				     (uint32_t)rss_conf->rss_key_len);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
@@ -390,7 +390,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	int rc;
 
 	/* Skip further configuration if selected mode is not RSS */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS || !qcnt)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
 		return 0;
 
 	/* Update default RSS key and cfg */
@@ -408,7 +408,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	}
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952dc..986902287b67 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -414,12 +414,12 @@ NIX_RX_FASTPATH_MODES
 	/* For PTP enabled, scalar rx function should be chosen as most of the
 	 * PTP apps are implemented to rx burst 1 pkt.
 	 */
-	if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		pick_rx_func(eth_dev, nix_eth_rx_burst);
 	else
 		pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 
 	/* Copy multi seg version with no offload for tear down sequence */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b913..c60190074926 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -1070,7 +1070,7 @@ NIX_TX_FASTPATH_MODES
 	else
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 
 	rte_mb();
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index f5161e17a16d..cce643b7b51d 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -50,7 +50,7 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
 
 	action = NIX_RX_ACTIONOP_UCAST;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		action = NIX_RX_ACTIONOP_RSS;
 		action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 	}
@@ -99,7 +99,7 @@ nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
 	 * Take offset from LA since in case of untagged packet,
 	 * lbptr is zero.
 	 */
-	if (type == ETH_VLAN_TYPE_OUTER) {
+	if (type == RTE_ETH_VLAN_TYPE_OUTER) {
 		vtag_action.act.vtag0_def = vtag_index;
 		vtag_action.act.vtag0_lid = NPC_LID_LA;
 		vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
@@ -413,7 +413,7 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
 		if (vlan->strip_on ||
 		    (vlan->qinq_on && !vlan->qinq_before_def)) {
 			if (eth_dev->data->dev_conf.rxmode.mq_mode ==
-								ETH_MQ_RX_RSS)
+								RTE_ETH_MQ_RX_RSS)
 				vlan->def_rx_mcam_ent.action |=
 							NIX_RX_ACTIONOP_RSS;
 			else
@@ -717,48 +717,48 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 
 	rxmode = &eth_dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, true);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, false);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, true, 0);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, false, 0);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
 		if (!dev->vlan_info.qinq_on) {
-			offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, true);
 			if (rc)
 				goto done;
 		}
 	} else {
 		if (dev->vlan_info.qinq_on) {
-			offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, false);
 			if (rc)
 				goto done;
 		}
 	}
 
-	if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-			DEV_RX_OFFLOAD_QINQ_STRIP)) {
+	if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
 		dev->rx_offloads |= offloads;
 		dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 		otx2_eth_set_rx_function(eth_dev);
@@ -780,7 +780,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
 
 	tpid_cfg->tpid = tpid;
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
 	else
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
@@ -789,7 +789,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	if (rc)
 		return rc;
 
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		dev->vlan_info.outer_vlan_tpid = tpid;
 	else
 		dev->vlan_info.inner_vlan_tpid = tpid;
@@ -864,7 +864,7 @@ otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev,       uint16_t vlan_id, int on)
 		vlan->outer_vlan_idx = 0;
 	}
 
-	rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+	rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					      vtag_index, on);
 	if (rc < 0) {
 		printf("Default tx entry failed with rc %d\n", rc);
@@ -986,12 +986,12 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
 	} else {
 		/* Reinstall all mcam entries now if filter offload is set */
 		if (eth_dev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			nix_vlan_reinstall_vlan_filters(eth_dev);
 	}
 
 	mask =
-	    ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	    RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	rc = otx2_nix_vlan_offload_set(eth_dev, mask);
 	if (rc) {
 		otx2_err("Failed to set vlan offload rc=%d", rc);
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index a243683d61d3..7bfa6098e230 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -33,15 +33,15 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	otx_epvf = OTX_EP_DEV(eth_dev);
 
-	devinfo->speed_capa = ETH_LINK_SPEED_10G;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	devinfo->max_rx_queues = otx_epvf->max_rx_queues;
 	devinfo->max_tx_queues = otx_epvf->max_tx_queues;
 
 	devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
 	devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
-	devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
-	devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
-	devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	devinfo->rx_offload_capa = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+	devinfo->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
+	devinfo->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
 
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..77593111f141 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -563,7 +563,7 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
@@ -697,7 +697,7 @@ otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)
@@ -954,13 +954,13 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
 	droq_pkt->l4_len = hdr_lens.l4_len;
 
 	if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
-	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
+	    !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)) {
 		rte_pktmbuf_free(droq_pkt);
 		goto oq_read_fail;
 	}
 
 	if (droq_pkt->nb_segs > 1 &&
-	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		rte_pktmbuf_free(droq_pkt);
 		goto oq_read_fail;
 	}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index a8774b7a432a..13d18e875444 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -135,10 +135,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_pcap_logtype, NOTICE);
@@ -655,7 +655,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -710,7 +710,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26e..a74f27bf8158 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -22,15 +22,15 @@ struct pfe_vdev_init_params {
 static struct pfe *g_pfe;
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 /* TODO: make pfe_svr a runtime option.
  * Driver should be able to get the SVR
@@ -613,9 +613,9 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	}
 
 	link.link_status = lstatus;
-	link.link_speed = ETH_LINK_SPEED_1G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_speed = RTE_ETH_LINK_SPEED_1G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	pfe_eth_atomic_write_link_status(dev, &link);
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7ab6d..511742c6a1b3 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -65,8 +65,8 @@ typedef u32 offsize_t;      /* In DWORDS !!! */
 struct eth_phy_cfg {
 /* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
 	u32 speed;
-#define ETH_SPEED_AUTONEG   0
-#define ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
+#define RTE_ETH_SPEED_AUTONEG   0
+#define RTE_ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
 
 	u32 pause;      /* bitmask */
 #define ETH_PAUSE_NONE		0x0
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 323d46e6ebb2..0af2f919e9d5 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -342,9 +342,9 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
 	}
 
 	use_tx_offload = !!(tx_offloads &
-			    (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
-			     DEV_TX_OFFLOAD_TCP_TSO | /* tso */
-			     DEV_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
+			    (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO | /* tso */
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
 
 	if (use_tx_offload) {
 		DP_INFO(edev, "Assigning qede_xmit_pkts\n");
@@ -1002,16 +1002,16 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			(void)qede_vlan_stripping(eth_dev, 1);
 		else
 			(void)qede_vlan_stripping(eth_dev, 0);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN filtering kicks in when a VLAN is added */
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			qede_vlan_filter_set(eth_dev, 0, 1);
 		} else {
 			if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
@@ -1022,7 +1022,7 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 				 * enabled
 				 */
 				eth_dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_VLAN_FILTER;
+						RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			} else {
 				qede_vlan_filter_set(eth_dev, 0, 0);
 			}
@@ -1112,12 +1112,12 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure TPA parameters */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		if (qede_enable_tpa(eth_dev, true))
 			return -EINVAL;
 		/* Enable scatter mode for LRO */
 		if (!eth_dev->data->scattered_rx)
-			rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER;
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	}
 
 	/* Start queues */
@@ -1132,7 +1132,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	 * Also, we would like to retain similar behavior in PF case, so we
 	 * don't do PF/VF specific check here.
 	 */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		if (qede_config_rss(eth_dev))
 			goto err;
 
@@ -1272,8 +1272,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE(edev);
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* We need to have min 1 RX queue.There is no min check in
 	 * rte_eth_dev_configure(), so we are checking it here.
@@ -1291,8 +1291,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		DP_NOTICE(edev, false,
 			  "Invalid devargs supplied, requested change will not take effect\n");
 
-	if (!(rxmode->mq_mode == ETH_MQ_RX_NONE ||
-	      rxmode->mq_mode == ETH_MQ_RX_RSS)) {
+	if (!(rxmode->mq_mode == RTE_ETH_MQ_RX_NONE ||
+	      rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)) {
 		DP_ERR(edev, "Unsupported multi-queue mode\n");
 		return -ENOTSUP;
 	}
@@ -1313,12 +1313,12 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	}
 
 	/* If jumbo enabled adjust MTU */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		eth_dev->data->mtu =
 			eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
 			RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 
 	if (qede_start_vport(qdev, eth_dev->data->mtu))
@@ -1327,8 +1327,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	qdev->mtu = eth_dev->data->mtu;
 
 	/* Enable VLAN offloads by default */
-	ret = qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK  |
-					     ETH_VLAN_FILTER_MASK);
+	ret = qede_vlan_offload_set(eth_dev, RTE_ETH_VLAN_STRIP_MASK  |
+					     RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -1391,35 +1391,35 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->reta_size = ECORE_RSS_IND_TABLE_SIZE;
 	dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
 	dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
-	dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_RX_OFFLOAD_UDP_CKSUM	|
-				     DEV_RX_OFFLOAD_TCP_CKSUM	|
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_RX_OFFLOAD_TCP_LRO	|
-				     DEV_RX_OFFLOAD_KEEP_CRC    |
-				     DEV_RX_OFFLOAD_SCATTER	|
-				     DEV_RX_OFFLOAD_JUMBO_FRAME |
-				     DEV_RX_OFFLOAD_VLAN_FILTER |
-				     DEV_RX_OFFLOAD_VLAN_STRIP  |
-				     DEV_RX_OFFLOAD_RSS_HASH);
+	dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO	|
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+				     RTE_ETH_RX_OFFLOAD_SCATTER	|
+				     RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+				     RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				     RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 	dev_info->rx_queue_offload_capa = 0;
 
 	/* TX offloads are on a per-packet basis, so it is applicable
 	 * to both at port and queue levels.
 	 */
-	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT	|
-				     DEV_TX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_TX_OFFLOAD_UDP_CKSUM	|
-				     DEV_TX_OFFLOAD_TCP_CKSUM	|
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_TX_OFFLOAD_MULTI_SEGS  |
-				     DEV_TX_OFFLOAD_TCP_TSO	|
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+	dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_VLAN_INSERT	|
+				     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO	|
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
 	dev_info->tx_queue_offload_capa = dev_info->tx_offload_capa;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	};
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1431,17 +1431,17 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G)
-		speed_cap |= ETH_LINK_SPEED_1G;
+		speed_cap |= RTE_ETH_LINK_SPEED_1G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
-		speed_cap |= ETH_LINK_SPEED_10G;
+		speed_cap |= RTE_ETH_LINK_SPEED_10G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
-		speed_cap |= ETH_LINK_SPEED_25G;
+		speed_cap |= RTE_ETH_LINK_SPEED_25G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
-		speed_cap |= ETH_LINK_SPEED_40G;
+		speed_cap |= RTE_ETH_LINK_SPEED_40G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
-		speed_cap |= ETH_LINK_SPEED_50G;
+		speed_cap |= RTE_ETH_LINK_SPEED_50G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
-		speed_cap |= ETH_LINK_SPEED_100G;
+		speed_cap |= RTE_ETH_LINK_SPEED_100G;
 	dev_info->speed_capa = speed_cap;
 
 	return 0;
@@ -1468,10 +1468,10 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	/* Link Mode */
 	switch (q_link.duplex) {
 	case QEDE_DUPLEX_HALF:
-		link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case QEDE_DUPLEX_FULL:
-		link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case QEDE_DUPLEX_UNKNOWN:
 	default:
@@ -1480,11 +1480,11 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	link.link_duplex = link_duplex;
 
 	/* Link Status */
-	link.link_status = q_link.link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link.link_status = q_link.link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	/* AN */
 	link.link_autoneg = (q_link.supported_caps & QEDE_SUPPORTED_AUTONEG) ?
-			     ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+			     RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 
 	DP_INFO(edev, "Link - Speed %u Mode %u AN %u Status %u\n",
 		link.link_speed, link.link_duplex,
@@ -2019,12 +2019,12 @@ static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Pause is assumed to be supported (SUPPORTED_Pause) */
-	if (fc_conf->mode == RTE_FC_FULL)
+	if (fc_conf->mode == RTE_ETH_FC_FULL)
 		params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
 					QED_LINK_PAUSE_RX_ENABLE);
-	if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
-	if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
 
 	params.link_up = true;
@@ -2048,13 +2048,13 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 
 	if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
 					 QED_LINK_PAUSE_TX_ENABLE))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2095,14 +2095,14 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
 static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
 {
 	*rss_caps = 0;
-	*rss_caps |= (hf & ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
 }
 
 int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
@@ -2228,7 +2228,7 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 	uint8_t entry;
 	int rc = 0;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported by hardware\n",
 		       reta_size);
 		return -EINVAL;
@@ -2289,7 +2289,7 @@ static int qede_rss_reta_query(struct rte_eth_dev *eth_dev,
 	uint16_t i, idx, shift;
 	uint8_t entry;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported\n",
 		       reta_size);
 		return -EINVAL;
@@ -2369,9 +2369,9 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 		}
 	}
 	if (frame_size > QEDE_ETH_MAX_LEN)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->data->dev_conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	if (!dev->data->dev_started && restart) {
 		qede_dev_start(dev);
diff --git a/drivers/net/qede/qede_filter.c b/drivers/net/qede/qede_filter.c
index c756594bfc4b..ceb47c17d0d6 100644
--- a/drivers/net/qede/qede_filter.c
+++ b/drivers/net/qede/qede_filter.c
@@ -144,7 +144,7 @@ int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
 
 	/* check FDIR modes */
 	switch (fdir->mode) {
@@ -542,7 +542,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -570,7 +570,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 					ECORE_TUNN_CLSS_MAC_VLAN, false);
 
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -622,7 +622,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for VXLAN was already configured\n",
@@ -659,7 +659,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 
 		qdev->vxlan.udp_port = udp_port;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for GENEVE was already configured\n",
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 298f4e3e4273..144dfef269f3 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -249,7 +249,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
 	bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
 	/* cache align the mbuf size to simplify rx_buf_size calculation */
 	bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)	||
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	||
 	    (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
 		if (!dev->data->scattered_rx) {
 			DP_INFO(edev, "Forcing scatter-gather mode\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c887..15112b83f4f7 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,14 +73,14 @@
 #define QEDE_MAX_ETHER_HDR_LEN	(RTE_ETHER_HDR_LEN + QEDE_ETH_OVERHEAD)
 #define QEDE_ETH_MAX_LEN	(RTE_ETHER_MTU + QEDE_MAX_ETHER_HDR_LEN)
 
-#define QEDE_RSS_OFFLOAD_ALL    (ETH_RSS_IPV4			|\
-				 ETH_RSS_NONFRAG_IPV4_TCP	|\
-				 ETH_RSS_NONFRAG_IPV4_UDP	|\
-				 ETH_RSS_IPV6			|\
-				 ETH_RSS_NONFRAG_IPV6_TCP	|\
-				 ETH_RSS_NONFRAG_IPV6_UDP	|\
-				 ETH_RSS_VXLAN			|\
-				 ETH_RSS_GENEVE)
+#define QEDE_RSS_OFFLOAD_ALL    (RTE_ETH_RSS_IPV4			|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_UDP	|\
+				 RTE_ETH_RSS_IPV6			|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_UDP	|\
+				 RTE_ETH_RSS_VXLAN			|\
+				 RTE_ETH_RSS_GENEVE)
 
 #define QEDE_RXTX_MAX(qdev) \
 	(RTE_MAX(qdev->num_rx_queues, qdev->num_tx_queues))
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 1faf38a714cf..8d1ef5fb22bc 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -56,10 +56,10 @@ struct pmd_internals {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_ring_logtype, NOTICE);
@@ -102,7 +102,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused) { return 0; }
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -110,21 +110,21 @@ static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
 	dev->data->dev_started = 0;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_down(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_up(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -163,8 +163,8 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
 
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 274a98e228e4..d93f9d2418b9 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -81,13 +81,13 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 {
 	uint32_t phy_caps = 0;
 
-	if (~speeds & ETH_LINK_SPEED_FIXED) {
+	if (~speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		phy_caps |= (1 << EFX_PHY_CAP_AN);
 		/*
 		 * If no speeds are specified in the mask, any supported
 		 * may be negotiated
 		 */
-		if (speeds == ETH_LINK_SPEED_AUTONEG)
+		if (speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 			phy_caps |=
 				(1 << EFX_PHY_CAP_1000FDX) |
 				(1 << EFX_PHY_CAP_10000FDX) |
@@ -96,17 +96,17 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 				(1 << EFX_PHY_CAP_50000FDX) |
 				(1 << EFX_PHY_CAP_100000FDX);
 	}
-	if (speeds & ETH_LINK_SPEED_1G)
+	if (speeds & RTE_ETH_LINK_SPEED_1G)
 		phy_caps |= (1 << EFX_PHY_CAP_1000FDX);
-	if (speeds & ETH_LINK_SPEED_10G)
+	if (speeds & RTE_ETH_LINK_SPEED_10G)
 		phy_caps |= (1 << EFX_PHY_CAP_10000FDX);
-	if (speeds & ETH_LINK_SPEED_25G)
+	if (speeds & RTE_ETH_LINK_SPEED_25G)
 		phy_caps |= (1 << EFX_PHY_CAP_25000FDX);
-	if (speeds & ETH_LINK_SPEED_40G)
+	if (speeds & RTE_ETH_LINK_SPEED_40G)
 		phy_caps |= (1 << EFX_PHY_CAP_40000FDX);
-	if (speeds & ETH_LINK_SPEED_50G)
+	if (speeds & RTE_ETH_LINK_SPEED_50G)
 		phy_caps |= (1 << EFX_PHY_CAP_50000FDX);
-	if (speeds & ETH_LINK_SPEED_100G)
+	if (speeds & RTE_ETH_LINK_SPEED_100G)
 		phy_caps |= (1 << EFX_PHY_CAP_100000FDX);
 
 	return phy_caps;
@@ -337,10 +337,10 @@ sfc_set_fw_subvariant(struct sfc_adapter *sa)
 			tx_offloads |= txq_info->offloads;
 	}
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_DEFAULT;
 	else
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_NO_TX_CSUM;
@@ -827,7 +827,7 @@ sfc_attach(struct sfc_adapter *sa)
 	sa->priv.shared->tunnel_encaps =
 		encp->enc_tunnel_encapsulations_supported;
 
-	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
 			  encp->enc_tso_v3_enabled;
 		if (!sa->tso)
@@ -836,8 +836,8 @@ sfc_attach(struct sfc_adapter *sa)
 
 	if (sa->tso &&
 	    (sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
-	     (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
+	     (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
 		sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
 				encp->enc_tso_v3_enabled;
 		if (!sa->tso_encap)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d4cb96881cd2..ca8774ad0950 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -916,11 +916,11 @@ struct sfc_dp_rx sfc_ef100_rx = {
 	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS |
 				  SFC_DP_RX_FEAT_INTR,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_RX_OFFLOAD_SCATTER |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_SCATTER |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.get_dev_info		= sfc_ef100_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_rx_qsize_up_rings,
 	.qcreate		= sfc_ef100_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 522e9a0d3470..7c91ee3fcb53 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -942,16 +942,16 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_MULTI_SEGS |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef100_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_tx_qsize_up_rings,
 	.qcreate		= sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f01..9ea207cca163 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -746,8 +746,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_FLOW_FLAG |
 				  SFC_DP_RX_FEAT_FLOW_MARK,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.queue_offload_capa	= 0,
 	.get_dev_info		= sfc_ef10_essb_rx_get_dev_info,
 	.pool_ops_supported	= sfc_ef10_essb_rx_pool_ops_supported,
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42fd..9aaabd30eee6 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -819,10 +819,10 @@ struct sfc_dp_rx sfc_ef10_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS |
 				  SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.get_dev_info		= sfc_ef10_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_rx_qsize_up_rings,
 	.qcreate		= sfc_ef10_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca5c..e7da4608bcb0 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -958,9 +958,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
 	if (txq->sw_ring == NULL)
 		goto fail_sw_ring_alloc;
 
-	if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			      DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
+	if (info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			      RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) {
 		txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
 					      info->txq_entries,
 					      SFC_TSOH_STD_LEN,
@@ -1125,14 +1125,14 @@ struct sfc_dp_tx sfc_ef10_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF10,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
@@ -1152,11 +1152,11 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
 		.type		= SFC_DP_TX,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MBUF_FAST_FREE,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3ad..8734bca4876f 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -102,19 +102,19 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vfs = sa->sriov.num_vfs;
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_1000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_1G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_10000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_10G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_25000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_40000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_50000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_100000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->max_rx_queues = sa->rxq_max;
 	dev_info->max_tx_queues = sa->txq_max;
@@ -142,8 +142,8 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_offload_capa = sfc_tx_get_dev_offload_caps(sa) |
 				    dev_info->tx_queue_offload_capa;
 
-	if (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		txq_offloads_def |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		txq_offloads_def |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf.offloads |= txq_offloads_def;
 
@@ -912,16 +912,16 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	switch (link_fc) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case EFX_FCNTL_RESPOND:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case EFX_FCNTL_GENERATE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case (EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	default:
 		sfc_err(sa, "%s: unexpected flow control value %#x",
@@ -952,16 +952,16 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		fcntl = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		fcntl = EFX_FCNTL_RESPOND;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		fcntl = EFX_FCNTL_GENERATE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		fcntl = EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE;
 		break;
 	default:
@@ -1070,7 +1070,7 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 	 */
 	if (mtu > RTE_ETHER_MTU) {
 		struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	}
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
@@ -1247,7 +1247,7 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 	qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
 	qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
 	if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
-		qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		qinfo->conf.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		qinfo->scattered_rx = 1;
 	}
 	qinfo->nb_desc = rxq_info->entries;
@@ -1472,9 +1472,9 @@ static efx_tunnel_protocol_t
 sfc_tunnel_rte_type_to_efx_udp_proto(enum rte_eth_tunnel_type rte_type)
 {
 	switch (rte_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		return EFX_TUNNEL_PROTOCOL_VXLAN;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		return EFX_TUNNEL_PROTOCOL_GENEVE;
 	default:
 		return EFX_TUNNEL_NPROTOS;
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 4f5993a68d23..dc2cdfea13c4 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -390,7 +390,7 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(ETH_VLAN_ID_MAX),
+		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
 		.inner_type = RTE_BE16(0xffff),
 	};
 
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index adb2b2cb8175..dea5272a79bc 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -387,7 +387,7 @@ sfc_port_configure(struct sfc_adapter *sa)
 
 	sfc_log_init(sa, "entry");
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		port->pdu = rxmode->max_rx_pkt_len;
 	else
 		port->pdu = EFX_MAC_PDU(dev_data->mtu);
@@ -577,66 +577,66 @@ sfc_port_link_mode_to_info(efx_link_mode_t link_mode,
 
 	memset(link_info, 0, sizeof(*link_info));
 	if ((link_mode == EFX_LINK_DOWN) || (link_mode == EFX_LINK_UNKNOWN))
-		link_info->link_status = ETH_LINK_DOWN;
+		link_info->link_status = RTE_ETH_LINK_DOWN;
 	else
-		link_info->link_status = ETH_LINK_UP;
+		link_info->link_status = RTE_ETH_LINK_UP;
 
 	switch (link_mode) {
 	case EFX_LINK_10HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_10FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_100FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_1000HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_1000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_10000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_25000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_25G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_25G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_40000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_40G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_40G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_50000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_50G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_50G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	default:
 		SFC_ASSERT(B_FALSE);
 		/* FALLTHROUGH */
 	case EFX_LINK_UNKNOWN:
 	case EFX_LINK_DOWN:
-		link_info->link_speed  = ETH_SPEED_NUM_NONE;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_NONE;
 		link_info->link_duplex = 0;
 		break;
 	}
 
-	link_info->link_autoneg = ETH_LINK_AUTONEG;
+	link_info->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 int
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9e0..a83b47a8d111 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -647,9 +647,9 @@ struct sfc_dp_rx sfc_efx_rx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_RX_EFX,
 	},
 	.features		= SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.qsize_up_rings		= sfc_efx_rx_qsize_up_rings,
 	.qcreate		= sfc_efx_rx_qcreate,
 	.qdestroy		= sfc_efx_rx_qdestroy,
@@ -930,7 +930,7 @@ sfc_rx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (encp->enc_tunnel_encapsulations_supported == 0)
-		no_caps |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	return ~no_caps;
 }
@@ -940,7 +940,7 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
 {
 	uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
 
-	caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	caps |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	return caps & sfc_rx_get_offload_mask(sa);
 }
@@ -1141,7 +1141,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 
 	if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
 				  encp->enc_rx_prefix_size,
-				  (offloads & DEV_RX_OFFLOAD_SCATTER),
+				  (offloads & RTE_ETH_RX_OFFLOAD_SCATTER),
 				  encp->enc_rx_scatter_max,
 				  &error)) {
 		sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
@@ -1167,15 +1167,15 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 		rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
 
 	rxq_info->type_flags |=
-		(offloads & DEV_RX_OFFLOAD_SCATTER) ?
+		(offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
 
 	if ((encp->enc_tunnel_encapsulations_supported != 0) &&
 	    (sfc_dp_rx_offload_capa(sa->priv.dp_rx) &
-	     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+	     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
 
-	if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+	if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
 
 	rc = sfc_ev_qinit(sa, SFC_EVQ_TYPE_RX, sw_index,
@@ -1205,7 +1205,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	rxq_info->refill_mb_pool = mb_pool;
 
 	if (rss->hash_support == EFX_RX_HASH_AVAILABLE && rss->channels > 0 &&
-	    (offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	    (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		rxq_info->rxq_flags = SFC_RXQ_FLAG_RSS_HASH;
 	else
 		rxq_info->rxq_flags = 0;
@@ -1301,19 +1301,19 @@ sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
  * Mapping between RTE RSS hash functions and their EFX counterparts.
  */
 static const struct sfc_rss_hf_rte_to_efx sfc_rss_hf_map[] = {
-	{ ETH_RSS_NONFRAG_IPV4_TCP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	  EFX_RX_HASH(IPV4_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	  EFX_RX_HASH(IPV4_UDP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX,
 	  EFX_RX_HASH(IPV6_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX,
 	  EFX_RX_HASH(IPV6_UDP, 4TUPLE) },
-	{ ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV4, 2TUPLE) },
-	{ ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
-	  ETH_RSS_IPV6_EX,
+	{ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+	  RTE_ETH_RSS_IPV6_EX,
 	  EFX_RX_HASH(IPV6_TCP, 2TUPLE) | EFX_RX_HASH(IPV6_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV6, 2TUPLE) }
 };
@@ -1633,10 +1633,10 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	int rc = 0;
 
 	switch (rxmode->mq_mode) {
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/* No special checks are required */
 		break;
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		if (rss->context_type == EFX_RX_SCALE_UNAVAILABLE) {
 			sfc_err(sa, "RSS is not available");
 			rc = EINVAL;
@@ -1653,16 +1653,16 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	 * so unsupported offloads cannot be added as the result of
 	 * below check.
 	 */
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
-	    (offloads_supported & DEV_RX_OFFLOAD_CHECKSUM)) {
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM) !=
+	    (offloads_supported & RTE_ETH_RX_OFFLOAD_CHECKSUM)) {
 		sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
-		rxmode->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	}
 
-	if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-	    (~rxmode->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if ((offloads_supported & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+	    (~rxmode->offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
-		rxmode->offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 	}
 
 	return rc;
@@ -1808,7 +1808,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	}
 
 configure_rss:
-	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
+	rss->channels = (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) ?
 			 MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
 
 	if (rss->channels > 0) {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 49b239f4d261..359acc71a47f 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -54,23 +54,23 @@ sfc_tx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (!encp->enc_hw_tx_insert_vlan_enabled)
-		no_caps |= DEV_TX_OFFLOAD_VLAN_INSERT;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (!encp->enc_tunnel_encapsulations_supported)
-		no_caps |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	if (!sa->tso)
-		no_caps |= DEV_TX_OFFLOAD_TCP_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_VXLAN)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_GENEVE)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	return ~no_caps;
 }
@@ -114,8 +114,8 @@ sfc_tx_qcheck_conf(struct sfc_adapter *sa, unsigned int txq_max_fill_level,
 	}
 
 	/* We either perform both TCP and UDP offload, or no offload at all */
-	if (((offloads & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) !=
-	    ((offloads & DEV_TX_OFFLOAD_UDP_CKSUM) == 0)) {
+	if (((offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) !=
+	    ((offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0)) {
 		sfc_err(sa, "TCP and UDP offloads can't be set independently");
 		rc = EINVAL;
 	}
@@ -309,7 +309,7 @@ sfc_tx_check_mode(struct sfc_adapter *sa, const struct rte_eth_txmode *txmode)
 	int rc = 0;
 
 	switch (txmode->mq_mode) {
-	case ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_NONE:
 		break;
 	default:
 		sfc_err(sa, "Tx multi-queue mode %u not supported",
@@ -515,23 +515,23 @@ sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 	if (rc != 0)
 		goto fail_ev_qstart;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_IPV4;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_INNER_IPV4;
 
-	if ((txq_info->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-	    (txq_info->offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+	if ((txq_info->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+	    (txq_info->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 		flags |= EFX_TXQ_CKSUM_TCPUDP;
 
-		if (offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (offloads_supported & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
 	}
 
-	if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (txq_info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		flags |= EFX_TXQ_FATSOV2;
 
 	rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -862,9 +862,9 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		/*
 		 * Here VLAN TCI is expected to be zero in case if no
-		 * DEV_TX_OFFLOAD_VLAN_INSERT capability is advertised;
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT capability is advertised;
 		 * if the calling app ignores the absence of
-		 * DEV_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
 		 * TX_ERROR will occur
 		 */
 		pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
@@ -1228,13 +1228,13 @@ struct sfc_dp_tx sfc_efx_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_TX_EFX,
 	},
 	.features		= 0,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO,
 	.qsize_up_rings		= sfc_efx_tx_qsize_up_rings,
 	.qcreate		= sfc_efx_tx_qcreate,
 	.qdestroy		= sfc_efx_tx_qdestroy,
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035b1..3ef33818a9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -173,7 +173,7 @@ pmd_dev_start(struct rte_eth_dev *dev)
 		return status;
 
 	/* Link UP */
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -184,7 +184,7 @@ pmd_dev_stop(struct rte_eth_dev *dev)
 	struct pmd_internals *p = dev->data->dev_private;
 
 	/* Link DOWN */
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	/* Firmware */
 	softnic_pipeline_disable_all(p);
@@ -386,10 +386,10 @@ pmd_ethdev_register(struct rte_vdev_device *vdev,
 
 	/* dev->data */
 	dev->data->dev_private = dev_private;
-	dev->data->dev_link.link_speed = ETH_SPEED_NUM_100G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	dev->data->mac_addrs = &eth_addr;
 	dev->data->promiscuous = 1;
 	dev->data->numa_node = params->cpu_id;
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 7416a6b1b816..255444a4181d 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1042,7 +1042,7 @@ static int
 eth_dev_configure(struct rte_eth_dev *dev)
 {
 	struct rte_eth_dev_data *data = dev->data;
-	if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev->rx_pkt_burst = eth_szedata2_rx_scattered;
 		data->scattered_rx = 1;
 	} else {
@@ -1064,11 +1064,11 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_queues = internals->max_rx_queues;
 	dev_info->max_tx_queues = internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
 	dev_info->tx_offload_capa = 0;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->tx_queue_offload_capa = 0;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1204,10 +1204,10 @@ eth_link_update(struct rte_eth_dev *dev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_speed = ETH_SPEED_NUM_100G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_status = ETH_LINK_UP;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	rte_eth_linkstatus_set(dev, &link);
 	return 0;
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf71d..ad5980ef5280 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -70,16 +70,16 @@
 
 #define TAP_IOV_DEFAULT_MAX 1024
 
-#define TAP_RX_OFFLOAD (DEV_RX_OFFLOAD_SCATTER |	\
-			DEV_RX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_RX_OFFLOAD_UDP_CKSUM |	\
-			DEV_RX_OFFLOAD_TCP_CKSUM)
+#define TAP_RX_OFFLOAD (RTE_ETH_RX_OFFLOAD_SCATTER |	\
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 
-#define TAP_TX_OFFLOAD (DEV_TX_OFFLOAD_MULTI_SEGS |	\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_TX_OFFLOAD_UDP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define TAP_TX_OFFLOAD (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |	\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 static int tap_devices_count;
 
@@ -97,10 +97,10 @@ static const char *valid_arguments[] = {
 static volatile uint32_t tap_trigger;	/* Rx trigger */
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 static void
@@ -433,7 +433,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 		len = readv(process_private->rxq_fds[rxq->queue_id],
 			*rxq->iovecs,
-			1 + (rxq->rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ?
+			1 + (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ?
 			     rxq->nb_rx_desc : 1));
 		if (len < (int)sizeof(struct tun_pi))
 			break;
@@ -489,7 +489,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		seg->next = NULL;
 		mbuf->packet_type = rte_net_get_ptype(mbuf, NULL,
 						      RTE_PTYPE_ALL_MASK);
-		if (rxq->rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		if (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 			tap_verify_csum(mbuf);
 
 		/* account for the receive frame */
@@ -866,7 +866,7 @@ tap_link_set_down(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 0, LOCAL_ONLY);
 }
 
@@ -876,7 +876,7 @@ tap_link_set_up(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
 }
 
@@ -956,30 +956,30 @@ tap_dev_speed_capa(void)
 	uint32_t speed = pmd_link.link_speed;
 	uint32_t capa = 0;
 
-	if (speed >= ETH_SPEED_NUM_10M)
-		capa |= ETH_LINK_SPEED_10M;
-	if (speed >= ETH_SPEED_NUM_100M)
-		capa |= ETH_LINK_SPEED_100M;
-	if (speed >= ETH_SPEED_NUM_1G)
-		capa |= ETH_LINK_SPEED_1G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_2_5G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_5G;
-	if (speed >= ETH_SPEED_NUM_10G)
-		capa |= ETH_LINK_SPEED_10G;
-	if (speed >= ETH_SPEED_NUM_20G)
-		capa |= ETH_LINK_SPEED_20G;
-	if (speed >= ETH_SPEED_NUM_25G)
-		capa |= ETH_LINK_SPEED_25G;
-	if (speed >= ETH_SPEED_NUM_40G)
-		capa |= ETH_LINK_SPEED_40G;
-	if (speed >= ETH_SPEED_NUM_50G)
-		capa |= ETH_LINK_SPEED_50G;
-	if (speed >= ETH_SPEED_NUM_56G)
-		capa |= ETH_LINK_SPEED_56G;
-	if (speed >= ETH_SPEED_NUM_100G)
-		capa |= ETH_LINK_SPEED_100G;
+	if (speed >= RTE_ETH_SPEED_NUM_10M)
+		capa |= RTE_ETH_LINK_SPEED_10M;
+	if (speed >= RTE_ETH_SPEED_NUM_100M)
+		capa |= RTE_ETH_LINK_SPEED_100M;
+	if (speed >= RTE_ETH_SPEED_NUM_1G)
+		capa |= RTE_ETH_LINK_SPEED_1G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_2_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_10G)
+		capa |= RTE_ETH_LINK_SPEED_10G;
+	if (speed >= RTE_ETH_SPEED_NUM_20G)
+		capa |= RTE_ETH_LINK_SPEED_20G;
+	if (speed >= RTE_ETH_SPEED_NUM_25G)
+		capa |= RTE_ETH_LINK_SPEED_25G;
+	if (speed >= RTE_ETH_SPEED_NUM_40G)
+		capa |= RTE_ETH_LINK_SPEED_40G;
+	if (speed >= RTE_ETH_SPEED_NUM_50G)
+		capa |= RTE_ETH_LINK_SPEED_50G;
+	if (speed >= RTE_ETH_SPEED_NUM_56G)
+		capa |= RTE_ETH_LINK_SPEED_56G;
+	if (speed >= RTE_ETH_SPEED_NUM_100G)
+		capa |= RTE_ETH_LINK_SPEED_100G;
 
 	return capa;
 }
@@ -1196,15 +1196,15 @@ tap_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 		tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, REMOTE_ONLY);
 		if (!(ifr.ifr_flags & IFF_UP) ||
 		    !(ifr.ifr_flags & IFF_RUNNING)) {
-			dev_link->link_status = ETH_LINK_DOWN;
+			dev_link->link_status = RTE_ETH_LINK_DOWN;
 			return 0;
 		}
 	}
 	tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, LOCAL_ONLY);
 	dev_link->link_status =
 		((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING) ?
-		 ETH_LINK_UP :
-		 ETH_LINK_DOWN);
+		 RTE_ETH_LINK_UP :
+		 RTE_ETH_LINK_DOWN);
 	return 0;
 }
 
@@ -1391,7 +1391,7 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
 	int ret;
 
 	/* initialize GSO context */
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (!pmd->gso_ctx_mp) {
 		/*
 		 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
@@ -1606,9 +1606,9 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
 
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->csum = !!(offloads &
-			(DEV_TX_OFFLOAD_IPV4_CKSUM |
-			 DEV_TX_OFFLOAD_UDP_CKSUM |
-			 DEV_TX_OFFLOAD_TCP_CKSUM));
+			(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_TCP_CKSUM));
 
 	ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
 	if (ret == -1)
@@ -1765,7 +1765,7 @@ static int
 tap_flow_ctrl_get(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	fc_conf->mode = RTE_FC_NONE;
+	fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1773,7 +1773,7 @@ static int
 tap_flow_ctrl_set(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	if (fc_conf->mode != RTE_FC_NONE)
+	if (fc_conf->mode != RTE_ETH_FC_NONE)
 		return -ENOTSUP;
 	return 0;
 }
@@ -2267,7 +2267,7 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
 			}
 		}
 	}
-	pmd_link.link_speed = ETH_SPEED_NUM_10G;
+	pmd_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 
 	TAP_LOG(DEBUG, "Initializing pmd_tun for %s", name);
 
@@ -2441,7 +2441,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
 		return 0;
 	}
 
-	speed = ETH_SPEED_NUM_10G;
+	speed = RTE_ETH_SPEED_NUM_10G;
 
 	/* use tap%d which causes kernel to choose next available */
 	strlcpy(tap_name, DEFAULT_TAP_NAME "%d", RTE_ETH_NAME_MAX_LEN);
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 176e7180bdaa..48c151cf6b68 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -13,7 +13,7 @@
 #define TAP_RSS_HASH_KEY_SIZE 40
 
 /* Supported RSS */
-#define TAP_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP))
+#define TAP_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP))
 
 /* hashed fields for RSS */
 enum hash_field {
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index fc1844ddfce1..26861e4103d9 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -61,14 +61,14 @@ nicvf_link_status_update(struct nicvf *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	if (nic->duplex == NICVF_HALF_DUPLEX)
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else if (nic->duplex == NICVF_FULL_DUPLEX)
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_speed = nic->speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -134,7 +134,7 @@ nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* rte_eth_link_get() might need to wait up to 9 seconds */
 		for (i = 0; i < MAX_CHECK_TIME; i++) {
 			nicvf_link_status_update(nic, &link);
-			if (link.link_status == ETH_LINK_UP)
+			if (link.link_status == RTE_ETH_LINK_UP)
 				break;
 			rte_delay_ms(CHECK_INTERVAL);
 		}
@@ -177,9 +177,9 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 		return -EINVAL;
 
 	if (frame_size > NIC_HW_L2_MAX_LEN)
-		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	if (nicvf_mbox_update_hw_max_frs(nic, mtu))
 		return -EINVAL;
@@ -404,35 +404,35 @@ nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
 	uint64_t nic_rss = 0;
 
-	if (ethdev_rss & ETH_RSS_IPV4)
+	if (ethdev_rss & RTE_ETH_RSS_IPV4)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_IPV6)
+	if (ethdev_rss & RTE_ETH_RSS_IPV6)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
-		if (ethdev_rss & ETH_RSS_VXLAN)
+		if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 			nic_rss |= RSS_TUN_VXLAN_ENA;
 
-		if (ethdev_rss & ETH_RSS_GENEVE)
+		if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 			nic_rss |= RSS_TUN_GENEVE_ENA;
 
-		if (ethdev_rss & ETH_RSS_NVGRE)
+		if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 			nic_rss |= RSS_TUN_NVGRE_ENA;
 	}
 
@@ -445,28 +445,28 @@ nicvf_rss_nic_to_ethdev(struct nicvf *nic,  uint64_t nic_rss)
 	uint64_t ethdev_rss = 0;
 
 	if (nic_rss & RSS_IP_ENA)
-		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+		ethdev_rss |= (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
-				ETH_RSS_NONFRAG_IPV6_TCP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
-				ETH_RSS_NONFRAG_IPV6_UDP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP);
 
 	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
-		ethdev_rss |= ETH_RSS_PORT;
+		ethdev_rss |= RTE_ETH_RSS_PORT;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
 		if (nic_rss & RSS_TUN_VXLAN_ENA)
-			ethdev_rss |= ETH_RSS_VXLAN;
+			ethdev_rss |= RTE_ETH_RSS_VXLAN;
 
 		if (nic_rss & RSS_TUN_GENEVE_ENA)
-			ethdev_rss |= ETH_RSS_GENEVE;
+			ethdev_rss |= RTE_ETH_RSS_GENEVE;
 
 		if (nic_rss & RSS_TUN_NVGRE_ENA)
-			ethdev_rss |= ETH_RSS_NVGRE;
+			ethdev_rss |= RTE_ETH_RSS_NVGRE;
 	}
 	return ethdev_rss;
 }
@@ -821,9 +821,9 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
 		    dev->data->nb_rx_queues,
 		    dev->data->dev_conf.lpbk_mode, rsshf);
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		ret = nicvf_rss_term(nic);
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
 	if (ret)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
@@ -884,7 +884,7 @@ nicvf_set_tx_function(struct rte_eth_dev *dev)
 
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		txq = dev->data->tx_queues[i];
-		if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 			multiseg = true;
 			break;
 		}
@@ -1007,7 +1007,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->offloads = offloads;
 
-	is_single_pool = !!(offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE);
+	is_single_pool = !!(offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);
 
 	/* Choose optimum free threshold value for multipool case */
 	if (!is_single_pool) {
@@ -1397,11 +1397,11 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	PMD_INIT_FUNC_TRACE();
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-				 ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+				 RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (nicvf_hw_version(nic) != PCI_SUB_DEVICE_ID_CN81XX_NICVF)
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
 	dev_info->max_rx_pktlen = NIC_HW_MAX_MTU + RTE_ETHER_HDR_LEN;
@@ -1430,10 +1430,10 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
-		.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
-			DEV_TX_OFFLOAD_UDP_CKSUM          |
-			DEV_TX_OFFLOAD_TCP_CKSUM,
+		.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM          |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
 	};
 
 	return 0;
@@ -1597,8 +1597,8 @@ nicvf_vf_start(struct rte_eth_dev *dev, struct nicvf *nic, uint32_t rbdrsz)
 		     nic->rbdr->tail, nb_rbdr_desc, nic->vf_id);
 
 	/* Configure VLAN Strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = nicvf_vlan_offload_config(dev, mask);
 
 	/* Based on the packet type(IPv4 or IPv6), the nicvf HW aligns L3 data
@@ -1727,11 +1727,11 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 	if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
 					    2 * VLAN_TAG_SIZE > buffsz)
 		dev->data->scattered_rx = 1;
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) != 0)
 		dev->data->scattered_rx = 1;
 
 	/* Setup MTU based on max_rx_pkt_len or default */
-	mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
+	mtu = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME ?
 		dev->data->dev_conf.rxmode.max_rx_pkt_len
 			-  RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
 
@@ -1914,8 +1914,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!rte_eal_has_hugepages()) {
 		PMD_INIT_LOG(INFO, "Huge page is not configured");
@@ -1927,8 +1927,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
@@ -1938,7 +1938,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -1973,7 +1973,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic->offload_cksum = 1;
 
 	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
@@ -2050,8 +2050,8 @@ nicvf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct nicvf *nic = nicvf_pmd_priv(dev);
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			nicvf_vlan_hw_strip(nic, true);
 		else
 			nicvf_vlan_hw_strip(nic, false);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..c1876bb9e1b7 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -16,33 +16,33 @@
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
 #define NICVF_RSS_OFFLOAD_PASS1 ( \
-	ETH_RSS_PORT | \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_PORT | \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define NICVF_RSS_OFFLOAD_TUNNEL ( \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
 
 #define NICVF_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_IPV4_CKSUM       | \
-	DEV_TX_OFFLOAD_UDP_CKSUM        | \
-	DEV_TX_OFFLOAD_TCP_CKSUM        | \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM       | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define NICVF_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM    | \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_JUMBO_FRAME | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM    | \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NICVF_DEFAULT_RX_FREE_THRESH    224
 #define NICVF_DEFAULT_TX_FREE_THRESH    224
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 006399468841..a42b7bfe55ae 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -997,7 +997,7 @@ txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
 	rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
 	rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
-	if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		restart = (rxcfg & TXGBE_RXCFG_ENA) &&
 			!(rxcfg & TXGBE_RXCFG_VLAN);
 		rxcfg |= TXGBE_RXCFG_VLAN;
@@ -1032,7 +1032,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
 	qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (vlan_ext) {
 			wr32m(hw, TXGBE_VLANCTL,
 				TXGBE_VLANCTL_TPID_MASK,
@@ -1052,7 +1052,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				TXGBE_TAGTPID_LSB(tpid));
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (vlan_ext) {
 			/* Only the high 16-bits is valid */
 			wr32m(hw, TXGBE_EXTAG,
@@ -1137,10 +1137,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -1239,7 +1239,7 @@ txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			txgbe_vlan_strip_queue_set(dev, i, 1);
 		else
 			txgbe_vlan_strip_queue_set(dev, i, 0);
@@ -1253,17 +1253,17 @@ txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct txgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -1274,25 +1274,25 @@ txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
 		txgbe_vlan_hw_strip_config(dev);
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			txgbe_vlan_hw_filter_enable(dev);
 		else
 			txgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			txgbe_vlan_hw_extend_enable(dev);
 		else
 			txgbe_vlan_hw_extend_disable(dev);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			txgbe_qinq_hw_strip_enable(dev);
 		else
 			txgbe_qinq_hw_strip_disable(dev);
@@ -1330,10 +1330,10 @@ txgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -1356,18 +1356,18 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (txgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -1377,13 +1377,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
 			dev->data->dev_conf.rxmode.mq_mode =
-				ETH_MQ_RX_VMDQ_ONLY;
+				RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -1392,13 +1392,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
 			dev->data->dev_conf.txmode.mq_mode =
-				ETH_MQ_TX_VMDQ_ONLY;
+				RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -1413,13 +1413,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdb+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1428,15 +1428,15 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1445,39 +1445,39 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
@@ -1494,8 +1494,8 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = txgbe_check_mq_mode(dev);
@@ -1637,7 +1637,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	 *    - half duplex (checked afterwards for valid speeds)
 	 *    - fixed speed: TODO implement
 	 */
-	if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR,
 		"Invalid link_speeds for port %u, fix speed not supported",
 				dev->data->port_id);
@@ -1704,15 +1704,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		txgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1773,8 +1773,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	if (err)
 		goto error;
 
-	allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+	allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
 	if (*link_speeds & ~allowed_speeds) {
@@ -1783,20 +1783,20 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = (TXGBE_LINK_SPEED_100M_FULL |
 			 TXGBE_LINK_SPEED_1GB_FULL |
 			 TXGBE_LINK_SPEED_10GB_FULL);
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= TXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= TXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= TXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= TXGBE_LINK_SPEED_100M_FULL;
 	}
 
@@ -2611,7 +2611,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
@@ -2644,11 +2644,11 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_desc_lim = tx_desc_lim;
 
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -2705,10 +2705,10 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	hw->mac.get_link_status = true;
 
@@ -2722,8 +2722,8 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -2742,34 +2742,34 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	}
 
 	intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case TXGBE_LINK_SPEED_UNKNOWN:
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case TXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
 
@@ -2994,7 +2994,7 @@ txgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3225,13 +3225,13 @@ txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -3363,10 +3363,10 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 		return -ENOTSUP;
 	}
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
@@ -3404,10 +3404,10 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
@@ -3593,12 +3593,12 @@ txgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			wr32(hw, TXGBE_UCADDRTBL(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			wr32(hw, TXGBE_UCADDRTBL(i), 0);
 		}
@@ -3622,15 +3622,15 @@ txgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= TXGBE_POOLETHCTL_UTA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= TXGBE_POOLETHCTL_MCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= TXGBE_POOLETHCTL_UCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= TXGBE_POOLETHCTL_BCA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= TXGBE_POOLETHCTL_MCP;
 
 	return new_val;
@@ -4281,15 +4281,15 @@ txgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = TXGBE_INCVAL_100;
 		shift = TXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = TXGBE_INCVAL_1GB;
 		shift = TXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = TXGBE_INCVAL_10GB;
 		shift = TXGBE_INCVAL_SHIFT_10GB;
@@ -4645,7 +4645,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -4656,7 +4656,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled */
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -4680,9 +4680,9 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled */
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4695,7 +4695,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4925,7 +4925,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -4956,7 +4956,7 @@ txgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -4996,7 +4996,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5004,7 +5004,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Geneve port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5012,7 +5012,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Teredo port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5020,7 +5020,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5052,7 +5052,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5062,7 +5062,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_GENEVEPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5072,7 +5072,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		cur_port = (uint16_t)rd32(hw, TXGBE_TEREDOPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5082,7 +5082,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORTGPE);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965c8..75a9e2580e27 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -56,15 +56,15 @@
 #define TXGBE_5TUPLE_MIN_PRI            1
 
 #define TXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define TXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define TXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 18ed94bd277b..05773cb20786 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -491,14 +491,14 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
 	dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
 	dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -579,22 +579,22 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -652,8 +652,8 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
 	txgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -896,10 +896,10 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only support hw strip feature, others are not support */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			txgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 8abb86228608..e303d87176ed 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -102,22 +102,22 @@ txgbe_fdir_enable(struct txgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf,
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf,
 		     uint32_t *fdirctrl, uint32_t *flex)
 {
 	*fdirctrl = 0;
 	*flex = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_256K;
 		break;
@@ -521,15 +521,15 @@ txgbe_atr_compute_hash(struct txgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= PERFECT_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= PERFECT_BUCKET_64KB_HASH_MASK;
@@ -564,15 +564,15 @@ txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_signature_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= SIG_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= SIG_BUCKET_64KB_HASH_MASK;
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b14176..6d7fd1842843 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1215,7 +1215,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973ba2..445733f3ba46 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -372,7 +372,7 @@ txgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -380,7 +380,7 @@ txgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -611,11 +611,11 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -634,7 +634,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= TXGBE_SECRXCTL_CRCSTRIP;
 	wr32(hw, TXGBE_SECRXCTL, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		wr32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA, 0);
 		reg = rd32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA);
 		if (reg != 0) {
@@ -642,7 +642,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		wr32(hw, TXGBE_SECTXCTL, TXGBE_SECTXCTL_STFWD);
 		reg = rd32(hw, TXGBE_SECTXCTL);
 		if (reg != TXGBE_SECTXCTL_STFWD) {
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index 494d779a3c9d..44f6f103edd2 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -103,15 +103,15 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct txgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -258,13 +258,13 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gcr_ext &= ~TXGBE_PORTCTL_NUMVT_MASK;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	}
@@ -613,29 +613,29 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &eth_dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = TXGBE_DEV_HW(eth_dev);
 		vmvir = rd32(hw, TXGBE_POOLTAG(vf));
 		vlana = vmvir & TXGBE_POOLTAG_ACT_MASK;
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1bd..c302d49af728 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1939,7 +1939,7 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
 uint64_t
 txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
 {
-	return DEV_RX_OFFLOAD_VLAN_STRIP;
+	return RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 }
 
 uint64_t
@@ -1949,35 +1949,35 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_JUMBO_FRAME |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_RSS_HASH |
-		   DEV_RX_OFFLOAD_SCATTER;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		   RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	if (!txgbe_is_vf(dev))
-		offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
-			     DEV_RX_OFFLOAD_QINQ_STRIP |
-			     DEV_RX_OFFLOAD_VLAN_EXTEND);
+		offloads |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 
 	/*
 	 * RSC is only supported by PF devices in a non-SR-IOV
 	 * mode.
 	 */
 	if (hw->mac.type == txgbe_mac_raptor && !sriov->active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == txgbe_mac_raptor)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
-	offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -2202,32 +2202,32 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_UDP_TSO	   |
-		DEV_TX_OFFLOAD_UDP_TNL_TSO	|
-		DEV_TX_OFFLOAD_IP_TNL_TSO	|
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO	|
-		DEV_TX_OFFLOAD_GRE_TNL_TSO	|
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO	|
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO	|
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_UDP_TSO	   |
+		RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (!txgbe_is_vf(dev))
-		tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2329,7 +2329,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/* Modification to set tail pointer for virtual function
@@ -2579,7 +2579,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2880,20 +2880,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		mrqc &= ~TXGBE_VFPLCFG_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2910,20 +2910,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		mrqc &= ~TXGBE_RACTL_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_RACTL_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_RACTL_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_RACTL_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2964,39 +2964,39 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
 			rss_hf = 0;
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		if (mrqc & TXGBE_RACTL_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_RACTL_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_RACTL_RSSENA))
 			rss_hf = 0;
 	}
@@ -3026,7 +3026,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev)
 	 */
 	if (adapter->rss_reta_updated == 0) {
 		reta = 0;
-		for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+		for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 			if (j == dev->data->nb_rx_queues)
 				j = 0;
 			reta = (reta >> 8) | LS32(j, 24, 0xFF);
@@ -3063,12 +3063,12 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		txgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * split rx buffer up into sections, each for 1 traffic class
@@ -3083,7 +3083,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = rd32(hw, TXGBE_PBRXSIZE(i));
 
 		rxpbsize &= (~(0x3FF << 10));
@@ -3091,7 +3091,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 
-	if (num_pools == ETH_16_POOLS) {
+	if (num_pools == RTE_ETH_16_POOLS) {
 		mrqc = TXGBE_PORTCTL_NUMTC_8;
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 	} else {
@@ -3110,7 +3110,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	wr32(hw, TXGBE_POOLCTL, vt_ctl);
 
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3131,7 +3131,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_POOLRXENA(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_ETHADDRIDX, 0);
 	wr32(hw, TXGBE_ETHADDRASSL, 0xFFFFFFFF);
@@ -3201,7 +3201,7 @@ txgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	/*PF VF Transmit Enable*/
 	wr32(hw, TXGBE_POOLTXENA(0),
 		vmdq_tx_conf->nb_queue_pools ==
-				ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+				RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	txgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3217,12 +3217,12 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3232,7 +3232,7 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3250,12 +3250,12 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3265,7 +3265,7 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3292,7 +3292,7 @@ txgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3319,7 +3319,7 @@ txgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3455,7 +3455,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/*
@@ -3466,8 +3466,8 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		/*Configure general VMDQ and DCB RX parameters*/
 		txgbe_vmdq_dcb_configure(dev);
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -3480,7 +3480,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -3491,7 +3491,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		txgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB TX configuration parameters from rte_eth_conf */
@@ -3507,15 +3507,15 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
-			if ((mask & 0x1) && j < ETH_DCB_NUM_USER_PRIORITIES)
+			if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -3556,7 +3556,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			wr32(hw, TXGBE_PBRXSIZE(i), 0);
 	}
 	if (config_dcb_tx) {
@@ -3572,7 +3572,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size*/
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			wr32(hw, TXGBE_PBTXSIZE(i), 0);
 			wr32(hw, TXGBE_PBTXDMATH(i), 0);
 		}
@@ -3614,7 +3614,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/* If the TC count is 8,
@@ -3628,7 +3628,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = txgbe_dcb_pfc_enabled;
 		}
 		txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -3699,12 +3699,12 @@ void txgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS)
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -3760,7 +3760,7 @@ txgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* pool enabling for receive - 64 */
 	wr32(hw, TXGBE_POOLRXENA(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		wr32(hw, TXGBE_POOLRXENA(1), UINT32_MAX);
 
 	/*
@@ -3884,11 +3884,11 @@ txgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
@@ -3911,15 +3911,15 @@ txgbe_config_vf_default(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	default:
@@ -3942,21 +3942,21 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			txgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode.*/
 			txgbe_rss_disable(dev);
@@ -3967,18 +3967,18 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4008,7 +4008,7 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			txgbe_vmdq_tx_hw_configure(hw);
 		else
 			wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_NUMVT_MASK, 0);
@@ -4018,13 +4018,13 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_64;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_32;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_16;
 			break;
 		default:
@@ -4087,10 +4087,10 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4098,22 +4098,22 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO can't be enabled when HW CRC "
 				    "is disabled");
 		return -EINVAL;
 	}
 
 	rfctl = rd32(hw, TXGBE_PSRCTL);
-	if (rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~TXGBE_PSRCTL_RSCDIA;
 	else
 		rfctl |= TXGBE_PSRCTL_RSCDIA;
 	wr32(hw, TXGBE_PSRCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set PSRCTL.RSCACK bit */
@@ -4253,7 +4253,7 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
 		struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
 
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 	}
 #endif
 }
@@ -4296,7 +4296,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = rd32(hw, TXGBE_SECRXCTL);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
 	else
 		hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4305,7 +4305,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	/*
 	 * Configure jumbo frame support, if any.
 	 */
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
 			TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
 	} else {
@@ -4329,7 +4329,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -4339,7 +4339,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -4376,11 +4376,11 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
 					    2 * TXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -4395,7 +4395,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = rd32(hw, TXGBE_PSRCTL);
 	rxcsum |= TXGBE_PSRCTL_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= TXGBE_PSRCTL_L4CSUM;
 	else
 		rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
@@ -4404,7 +4404,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 
 	if (hw->mac.type == txgbe_mac_raptor) {
 		rdrxctl = rd32(hw, TXGBE_SECRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4527,8 +4527,8 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 		txgbe_setup_loopback_link_raptor(hw);
 
 #ifdef RTE_LIB_SECURITY
-	if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
-	    (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) ||
+	    (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = txgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -4836,7 +4836,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Set PSR type for VF RSS according to max Rx queue */
 	psrtype = TXGBE_VFPLCFG_PSRL4HDR |
@@ -4888,7 +4888,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		 */
 		wr32(hw, TXGBE_RXCFG(i), srrctl);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (rxmode->max_rx_pkt_len +
 				2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -4897,8 +4897,8 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/*
@@ -5069,7 +5069,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev,
 	 * little-endian order.
 	 */
 	reta = 0;
-	for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+	for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 		if (j == conf->conf.queue_num)
 			j = 0;
 		reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF);
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b96f58a3f848..27d4c842c0e7 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -309,7 +309,7 @@ struct txgbe_rx_queue {
 	uint8_t             rx_deferred_start; /**< not in global dev start. */
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -392,7 +392,7 @@ struct txgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t            offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t            offloads; /* Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3abe3959eb1a..3171be73d05d 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -118,14 +118,14 @@ txgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -364,10 +364,10 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -381,7 +381,7 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9aed..778460aab5e1 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -125,8 +125,8 @@ static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
 
 static struct rte_eth_link pmd_link = {
 		.link_speed = 10000,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN
 };
 
 struct rte_vhost_vring_state {
@@ -823,7 +823,7 @@ new_device(int vid)
 
 	rte_vhost_get_mtu(vid, &eth_dev->data->mtu);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_atomic32_set(&internal->dev_attached, 1);
 	update_queuing_status(eth_dev);
@@ -858,7 +858,7 @@ destroy_device(int vid)
 	rte_atomic32_set(&internal->dev_attached, 0);
 	update_queuing_status(eth_dev);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
 		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1124,7 +1124,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
 	if (vhost_driver_setup(dev) < 0)
 		return -1;
 
-	internal->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	internal->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -1273,9 +1273,9 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_tx_queues = internal->max_queues;
 	dev_info->min_rx_bufsize = 0;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				DEV_TX_OFFLOAD_VLAN_INSERT;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return 0;
 }
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e58085a2c95a..00bbbb2b3537 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -703,7 +703,7 @@ int
 virtio_dev_close(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "virtio_dev_close");
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1763,7 +1763,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		     hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2],
 		     hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]);
 
-	if (hw->speed == ETH_SPEED_NUM_UNKNOWN) {
+	if (hw->speed == RTE_ETH_SPEED_NUM_UNKNOWN) {
 		if (virtio_with_feature(hw, VIRTIO_NET_F_SPEED_DUPLEX)) {
 			config = &local_config;
 			virtio_read_dev_config(hw,
@@ -1777,7 +1777,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		}
 	}
 	if (hw->duplex == DUPLEX_UNKNOWN)
-		hw->duplex = ETH_LINK_FULL_DUPLEX;
+		hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	PMD_INIT_LOG(DEBUG, "link speed = %d, duplex = %d",
 		hw->speed, hw->duplex);
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
@@ -1876,7 +1876,7 @@ int
 eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct virtio_hw *hw = eth_dev->data->dev_private;
-	uint32_t speed = ETH_SPEED_NUM_UNKNOWN;
+	uint32_t speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	int vectorized = 0;
 	int ret;
 
@@ -1948,22 +1948,22 @@ static uint32_t
 virtio_dev_speed_capa_get(uint32_t speed)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
@@ -2079,14 +2079,14 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "configure");
 	req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Rx multi queue mode %d",
 			rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Tx multi queue mode %d",
 			txmode->mq_mode);
@@ -2104,20 +2104,20 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 
 	hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			   DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_GUEST_CSUM);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 			(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_CSUM);
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_HOST_TSO4) |
 			(1ULL << VIRTIO_NET_F_HOST_TSO6);
@@ -2129,15 +2129,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 			return ret;
 	}
 
-	if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			    DEV_RX_OFFLOAD_TCP_CKSUM)) &&
+	if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			    RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
 		!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
 		PMD_DRV_LOG(ERR,
 			"rx checksum not available on this host");
 		return -ENOTSUP;
 	}
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
 		(!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
 		 !virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6))) {
 		PMD_DRV_LOG(ERR,
@@ -2149,12 +2149,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
 		virtio_dev_cq_start(dev);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		hw->vlan_strip = 1;
 
-	hw->rx_ol_scatter = (rx_offloads & DEV_RX_OFFLOAD_SCATTER);
+	hw->rx_ol_scatter = (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 			!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 		PMD_DRV_LOG(ERR,
 			    "vlan filtering not available on this host");
@@ -2207,7 +2207,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+			if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 				PMD_DRV_LOG(INFO,
 					"disabled packed ring vectorized rx for TCP_LRO enabled");
 				hw->use_vec_rx = 0;
@@ -2234,10 +2234,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_LRO |
-					   DEV_RX_OFFLOAD_VLAN_STRIP)) {
+			if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_LRO |
+					   RTE_ETH_RX_OFFLOAD_VLAN_STRIP)) {
 				PMD_DRV_LOG(INFO,
 					"disabled split ring vectorized rx for offloading enabled");
 				hw->use_vec_rx = 0;
@@ -2401,7 +2401,7 @@ virtio_dev_stop(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
 	struct rte_eth_link link;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "stop");
 	dev->data->dev_started = 0;
@@ -2440,28 +2440,28 @@ virtio_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complet
 	memset(&link, 0, sizeof(link));
 	link.link_duplex = hw->duplex;
 	link.link_speed  = hw->speed;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (!hw->started) {
-		link.link_status = ETH_LINK_DOWN;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else if (virtio_with_feature(hw, VIRTIO_NET_F_STATUS)) {
 		PMD_INIT_LOG(DEBUG, "Get link status from hw");
 		virtio_read_dev_config(hw,
 				offsetof(struct virtio_net_config, status),
 				&status, sizeof(status));
 		if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
-			link.link_status = ETH_LINK_DOWN;
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_status = RTE_ETH_LINK_DOWN;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			PMD_INIT_LOG(DEBUG, "Port %d is down",
 				     dev->data->port_id);
 		} else {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			PMD_INIT_LOG(DEBUG, "Port %d is up",
 				     dev->data->port_id);
 		}
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -2474,8 +2474,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct virtio_hw *hw = dev->data->dev_private;
 	uint64_t offloads = rxmode->offloads;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if ((offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if ((offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 				!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 
 			PMD_DRV_LOG(NOTICE,
@@ -2485,8 +2485,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK)
-		hw->vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
+		hw->vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -2508,33 +2508,33 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = hw->max_mtu;
 
 	host_features = VIRTIO_OPS(hw)->get_features(hw);
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
-	dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM;
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 	}
 	if (host_features & (1ULL << VIRTIO_NET_F_CTRL_VLAN))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	tso_mask = (1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 		(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_LRO;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 	}
 	tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
 		(1ULL << VIRTIO_NET_F_HOST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	return 0;
 }
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 1a3291273a11..825a6adfc2b1 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -41,21 +41,21 @@
 #define	VMXNET3_TX_MAX_SEG	UINT8_MAX
 
 #define VMXNET3_TX_OFFLOAD_CAP		\
-	(DEV_TX_OFFLOAD_VLAN_INSERT |	\
-	 DEV_TX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_TX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_TX_OFFLOAD_TCP_TSO |	\
-	 DEV_TX_OFFLOAD_MULTI_SEGS)
+	(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+	 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define VMXNET3_RX_OFFLOAD_CAP		\
-	(DEV_RX_OFFLOAD_VLAN_STRIP |	\
-	 DEV_RX_OFFLOAD_VLAN_FILTER |   \
-	 DEV_RX_OFFLOAD_SCATTER |	\
-	 DEV_RX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_LRO |	\
-	 DEV_RX_OFFLOAD_JUMBO_FRAME |   \
-	 DEV_RX_OFFLOAD_RSS_HASH)
+	(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |	\
+	 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |   \
+	 RTE_ETH_RX_OFFLOAD_SCATTER |	\
+	 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_LRO |	\
+	 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |   \
+	 RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 int vmxnet3_segs_dynfield_offset = -1;
 
@@ -399,9 +399,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* set the initial link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(eth_dev, &link);
 
 	return 0;
@@ -487,8 +487,8 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_tx_queues > VMXNET3_MAX_TX_QUEUES ||
 	    dev->data->nb_rx_queues > VMXNET3_MAX_RX_QUEUES) {
@@ -548,7 +548,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 	hw->queueDescPA = mz->iova;
 	hw->queue_desc_len = (uint16_t)size;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Allocate memory structure for UPT1_RSSConf and configure */
 		mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf),
 				      "rss_conf", rte_socket_id(),
@@ -844,15 +844,15 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	devRead->rxFilterConf.rxMode = 0;
 
 	/* Setting up feature flags */
-	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		devRead->misc.uptFeatures |= VMXNET3_F_RXCSUM;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		devRead->misc.uptFeatures |= VMXNET3_F_LRO;
 		devRead->misc.maxNumRxSG = 0;
 	}
 
-	if (port_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		ret = vmxnet3_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS)
 			return ret;
@@ -864,7 +864,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	}
 
 	ret = vmxnet3_dev_vlan_offload_set(dev,
-			ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+			RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -931,7 +931,7 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
 	}
 
 	if (VMXNET3_VERSION_GE_4(hw) &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Check for additional RSS  */
 		ret = vmxnet3_v4_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS) {
@@ -1040,9 +1040,9 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
 
 	/* Clear recorded link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(dev, &link);
 
 	hw->adapter_stopped = 1;
@@ -1372,7 +1372,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
 	dev_info->min_mtu = VMXNET3_MIN_MTU;
 	dev_info->max_mtu = VMXNET3_MAX_MTU;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
 
 	dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
@@ -1454,10 +1454,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
 	ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
 
 	if (ret & 0x1)
-		link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+		link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -1510,7 +1510,7 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	uint32_t *vf_table = hw->shared->devRead.rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 	else
 		memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
@@ -1580,8 +1580,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint32_t *vf_table = devRead->rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
 		else
 			devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
@@ -1590,8 +1590,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 				       VMXNET3_CMD_UPDATE_FEATURE);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 		else
 			memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 59bee9723cfc..7588ba929b65 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -32,18 +32,18 @@
 				VMXNET3_MAX_RX_QUEUES + 1)
 
 #define VMXNET3_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 #define VMXNET3_V4_RSS_MASK ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define VMXNET3_MANDATORY_V4_RSS ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 /* RSS configuration structure - shared with device through GPA */
 typedef struct VMXNET3_RSSConf {
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 5cf53d4de825..0f2671f528f4 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1326,13 +1326,13 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
 	rss_hf = port_rss_conf->rss_hf &
 		(VMXNET3_V4_RSS_MASK | VMXNET3_RSS_OFFLOAD_ALL);
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
 
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
@@ -1389,13 +1389,13 @@ vmxnet3_rss_configure(struct rte_eth_dev *dev)
 	/* loading hashType */
 	dev_rss_conf->hashType = 0;
 	rss_hf = port_rss_conf->rss_hf & VMXNET3_RSS_OFFLOAD_ALL;
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV4;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV6;
 
 	return VMXNET3_SUCCESS;
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index 5251db0b1674..ecc6ef2965ee 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -71,12 +71,12 @@ mbuf_input(struct rte_mbuf *mbuf)
 
 static const struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -334,7 +334,7 @@ check_port_link_status(uint16_t port_id)
 
 		if (link_get_err >= 0 && link.link_status) {
 			const char *dp = (link.link_duplex ==
-				ETH_LINK_FULL_DUPLEX) ?
+				RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex";
 			printf("\nPort %u Link Up - speed %s - %s\n",
 				port_id,
diff --git a/examples/bond/main.c b/examples/bond/main.c
index f48400e21156..e4c627e203a4 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -116,18 +116,18 @@ static struct rte_mempool *mbuf_pool;
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -151,9 +151,9 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
@@ -243,9 +243,9 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			BOND_PORT, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(BOND_PORT, 1, 1, &local_port_conf);
 	if (retval != 0)
 		rte_exit(EXIT_FAILURE, "port %u: configuration failed (res=%d)\n",
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 1b1029660e77..e6af8420e4c6 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -80,16 +80,16 @@ struct app_stats prev_app_stats;
 
 static const struct rte_eth_conf port_conf_default = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		}
 	},
 };
@@ -127,9 +127,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 21ed85c7d6c9..5053d174335c 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -98,7 +98,7 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
 	int ret;
 
 	memset(&cfg_port, 0, sizeof(cfg_port));
-	cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
+	cfg_port.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
 
 	for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
 		struct app_port *ptr_port = &app_cfg->ports[idx_port];
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 413251630709..e7cdf8d5775b 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -233,13 +233,13 @@ rte_ethtool_get_pauseparam(uint16_t port_id,
 	pause_param->tx_pause = 0;
 	pause_param->rx_pause = 0;
 	switch (fc_conf.mode) {
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		pause_param->rx_pause = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		pause_param->tx_pause = 1;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_param->rx_pause = 1;
 		pause_param->tx_pause = 1;
 	default:
@@ -277,14 +277,14 @@ rte_ethtool_set_pauseparam(uint16_t port_id,
 
 	if (pause_param->tx_pause) {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_FULL;
+			fc_conf.mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf.mode = RTE_FC_TX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
 	} else {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_RX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
 		else
-			fc_conf.mode = RTE_FC_NONE;
+			fc_conf.mode = RTE_ETH_FC_NONE;
 	}
 
 	status = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
@@ -398,12 +398,12 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id)
 	for (vf = 0; vf < num_vfs; vf++) {
 #ifdef RTE_NET_IXGBE
 		rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
-			ETH_VMDQ_ACCEPT_UNTAG, 0);
+			RTE_ETH_VMDQ_ACCEPT_UNTAG, 0);
 #endif
 	}
 
 	/* Enable Rx vlan filter, VF unsupported status is discarded */
-	ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_MASK);
+	ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK);
 	if (ret != 0)
 		return ret;
 
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index f70ab0cc9e38..3ac98add5692 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -283,14 +283,14 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -312,12 +312,12 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index ca6cd200caad..5780928d75ee 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -614,14 +614,14 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -643,9 +643,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
 
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index db71f5aa0401..f44ee65372ff 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -218,9 +218,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index 29fb4b3d55ef..150406e385d4 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -113,7 +113,7 @@ assert_link_status(void)
 	memset(&link, 0, sizeof(link));
 	do {
 		link_get_err = rte_eth_link_get(port_id, &link);
-		if (link_get_err == 0 && link.link_status == ETH_LINK_UP)
+		if (link_get_err == 0 && link.link_status == RTE_ETH_LINK_UP)
 			break;
 		rte_delay_ms(CHECK_INTERVAL);
 	} while (--rep_cnt);
@@ -121,7 +121,7 @@ assert_link_status(void)
 	if (link_get_err < 0)
 		rte_exit(EXIT_FAILURE, ":: error: link get is failing: %s\n",
 			 rte_strerror(-link_get_err));
-	if (link.link_status == ETH_LINK_DOWN)
+	if (link.link_status == RTE_ETH_LINK_DOWN)
 		rte_exit(EXIT_FAILURE, ":: error: link is still down\n");
 }
 
@@ -138,12 +138,12 @@ init_port(void)
 		},
 		.txmode = {
 			.offloads =
-				DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO,
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO,
 		},
 	};
 	struct rte_eth_txconf txq_conf;
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 0c413180f889..94e3ac91b299 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -819,13 +819,13 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 	/* Configuring port to use RSS for multiple RX queues. 8< */
 	static const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.max_rx_pkt_len = RTE_ETHER_MAX_LEN
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_PROTO_MASK,
+				.rss_hf = RTE_ETH_RSS_PROTO_MASK,
 			}
 		}
 	};
@@ -853,9 +853,9 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Cannot configure device:"
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f24536972084..aa41fcc1d037 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -148,14 +148,14 @@ static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
 		.split_hdr_size = 0,
-		.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
-			     DEV_RX_OFFLOAD_SCATTER |
-			     DEV_RX_OFFLOAD_JUMBO_FRAME),
+		.offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+			     RTE_ETH_RX_OFFLOAD_SCATTER |
+			     RTE_ETH_RX_OFFLOAD_JUMBO_FRAME),
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -624,7 +624,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356bc..8e974a8d0a92 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -45,7 +45,7 @@ link_next(struct link *link)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -57,12 +57,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -139,7 +139,7 @@ link_create(const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -157,9 +157,9 @@ link_create(const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -267,5 +267,5 @@ link_is_up(const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790be4..8aabea002bbb 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -161,22 +161,22 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
 		.split_hdr_size = 0,
-		.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
-			     DEV_RX_OFFLOAD_JUMBO_FRAME),
+		.offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+			     RTE_ETH_RX_OFFLOAD_JUMBO_FRAME),
 	},
 	.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -740,7 +740,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -1097,9 +1097,9 @@ main(int argc, char **argv)
 		n_tx_queue = nb_lcores;
 		if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
 			n_tx_queue = MAX_TX_QUEUE_PER_PORT;
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f252d34985b4..73932564e459 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -234,20 +234,20 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1456,10 +1456,10 @@ print_usage(const char *prgname)
 		"               \"parallel\" : Parallel\n"
 		"  --" CMD_LINE_OPT_RX_OFFLOAD
 		": bitmask of the RX HW offload capabilities to enable/use\n"
-		"                         (DEV_RX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_RX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_TX_OFFLOAD
 		": bitmask of the TX HW offload capabilities to enable/use\n"
-		"                         (DEV_TX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_TX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_REASSEMBLE " NUM"
 		": max number of entries in reassemble(fragment) table\n"
 		"    (zero (default value) disables reassembly)\n"
@@ -1908,7 +1908,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2211,12 +2211,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 
 	frame_size = MTU_TO_FRAMELEN(mtu_size);
 	if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
-		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	local_port_conf.rxmode.max_rx_pkt_len = frame_size;
 
 	if (multi_seg_required()) {
-		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	local_port_conf.rxmode.offloads |= req_rx_offloads;
@@ -2239,12 +2239,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 			portid, local_port_conf.txmode.offloads,
 			dev_info.tx_offload_capa);
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	printf("port %u configurng rx_offloads=0x%" PRIx64
 		", tx_offloads=0x%" PRIx64 "\n",
@@ -2302,7 +2302,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 		/* Pre-populate pkt offloads based on capabilities */
 		qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
 		qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
-		if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+		if (local_port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 			qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
 
 		tx_queueid++;
@@ -2663,7 +2663,7 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
 	struct rte_flow *flow;
 	int ret;
 
-	if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	/* Add the default rte_flow to enable SECURITY for all ESP packets */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c971..5cdd794f017f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -986,7 +986,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	if (inbound) {
 		if ((dev_info.rx_offload_capa &
-				DEV_RX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_RX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware RX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -994,7 +994,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	} else { /* outbound */
 		if ((dev_info.tx_offload_capa &
-				DEV_TX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_TX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware TX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -1628,7 +1628,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*rx_offloads |= DEV_RX_OFFLOAD_SECURITY;
+			*rx_offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 	}
 
 	/* Check for outbound rules that use offloads and use this port */
@@ -1639,7 +1639,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*tx_offloads |= DEV_TX_OFFLOAD_SECURITY;
+			*tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
 	}
 	return 0;
 }
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index cc527d7f6b38..96fb325ff180 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -112,11 +112,11 @@ static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
+		.offloads = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	},
 };
 
@@ -620,7 +620,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/kni/main.c b/examples/kni/main.c
index beabb3c848aa..81124dc0dc88 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -95,7 +95,7 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
 /* Options for configuring ethernet port */
 static struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -608,9 +608,9 @@ init_port(uint16_t port)
 			"Error during getting device (port %u) info: %s\n",
 			port, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
@@ -688,7 +688,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -792,9 +792,9 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
 	memcpy(&conf, &port_conf, sizeof(conf));
 	/* Set new MTU */
 	if (new_mtu > RTE_ETHER_MAX_LEN)
-		conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* mtu + length of header + length of FCS = max pkt length */
 	conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 5f539c458cdd..89489843e2bd 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -216,12 +216,12 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1809,7 +1809,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2633,9 +2633,9 @@ initialize_ports(struct l2fwd_crypto_options *options)
 			return retval;
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		retval = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (retval < 0) {
 			printf("Cannot configure device: err=%d, port=%u\n",
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index b8c1e02d7598..80a72f7095cf 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -15,7 +15,7 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 			.split_hdr_size = 0,
 		},
 		.txmode = {
-			.mq_mode = ETH_MQ_TX_NONE,
+			.mq_mode = RTE_ETH_MQ_TX_NONE,
 		},
 	};
 	uint16_t nb_ports_available = 0;
@@ -23,9 +23,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 	int ret;
 
 	if (rsrc->event_mode) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
-		port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+		port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	}
 
 	/* Initialise each port */
@@ -61,9 +61,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 				local_port_conf.rx_adv_conf.rss_conf.rss_hf);
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queue. 8< */
 		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1db89f2bd139..9806204b81d1 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -395,7 +395,7 @@ check_all_ports_link_status(struct l2fwd_resources *rsrc,
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index bbb4a27a6d54..2e50339afb61 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -94,7 +94,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -726,7 +726,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -869,9 +869,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index 4e1a17cfe4f5..d228a842788d 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -478,7 +478,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -650,9 +650,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE,
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 911e40c66e0e..b4a69dde63dc 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -95,7 +95,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -606,7 +606,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -792,9 +792,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the number of queues for a port. */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564b6..9323426e9b1d 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -124,20 +124,20 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1815,9 +1815,9 @@ parse_args(int argc, char **argv)
 
 			printf("jumbo frame is enabled\n");
 			port_conf.rxmode.offloads |=
-					DEV_RX_OFFLOAD_JUMBO_FRAME;
+					RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			port_conf.txmode.offloads |=
-					DEV_TX_OFFLOAD_MULTI_SEGS;
+					RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 			/*
 			 * if no max-pkt-len set, then use the
@@ -1970,7 +1970,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2080,9 +2080,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index a0de8ca9b42d..278fe95970f3 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -111,18 +111,18 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -494,8 +494,8 @@ parse_args(int argc, char **argv)
 			const struct option lenopts = {"max-pkt-len",
 						       required_argument, 0, 0};
 
-			port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-			port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+			port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+			port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 			/*
 			 * if no max-pkt-len set, use the default
@@ -628,7 +628,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* Clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -807,9 +807,9 @@ main(int argc, char **argv)
 		       nb_rx_queue, n_tx_queue);
 
 		rte_eth_dev_info_get(portid, &dev_info);
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index aa7b8db44ae8..85609e9d4593 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -250,19 +250,19 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_UDP,
+			.rss_hf = RTE_ETH_RSS_UDP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	}
 };
 
@@ -1961,9 +1961,9 @@ parse_args(int argc, char **argv)
 
 				printf("jumbo frame is enabled \n");
 				port_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 				port_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MULTI_SEGS;
+						RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 				/**
 				 * if no max-pkt-len set, use the default value
@@ -2222,7 +2222,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2622,9 +2622,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 961860ea18ef..7c7613a83aad 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -75,9 +75,9 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
 			rte_panic("Error during getting device (port %u) info:"
 				  "%s\n", port_id, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+						RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 						dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 00ac267af1dd..500444565463 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -120,19 +120,19 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -703,8 +703,8 @@ parse_args(int argc, char **argv)
 				"max-pkt-len", required_argument, 0, 0
 			};
 
-			port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-			port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+			port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+			port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 			/*
 			 * if no max-pkt-len set, use the default
@@ -926,7 +926,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -1035,15 +1035,15 @@ l3fwd_poll_resource_setup(void)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
 
 		if (dev_info.max_rx_queues == 1)
-			local_port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+			local_port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 
 		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
 				port_conf.rx_adv_conf.rss_conf.rss_hf) {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index 7470aa539a90..7c1214512983 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.intr_conf = {
 		.lsc = 1, /**< lsc interrupt feature enabled */
@@ -147,7 +147,7 @@ print_stats(void)
 			   link_get_err < 0 ? "0" :
 			   rte_eth_link_speed_to_str(link.link_speed),
 			   link_get_err < 0 ? "Link get failed" :
-			   (link.link_duplex == ETH_LINK_FULL_DUPLEX ? \
+			   (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ? \
 					"full-duplex" : "half-duplex"),
 			   port_statistics[portid].tx,
 			   port_statistics[portid].rx,
@@ -507,7 +507,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -634,9 +634,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index 1ad71ca7ec5f..23307073c904 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -94,7 +94,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS
+			.mq_mode = RTE_ETH_MQ_RX_RSS
 		}
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_clients;
@@ -213,7 +213,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index 01dc3acf34d5..85955375f1bf 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -176,18 +176,18 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 {
 	struct rte_eth_conf port_conf = {
 			.rxmode = {
-				.mq_mode	= ETH_MQ_RX_RSS,
+				.mq_mode	= RTE_ETH_MQ_RX_RSS,
 				.split_hdr_size = 0,
-				.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+				.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 			},
 			.rx_adv_conf = {
 				.rss_conf = {
 					.rss_key = NULL,
-					.rss_hf = ETH_RSS_IP,
+					.rss_hf = RTE_ETH_RSS_IP,
 				},
 			},
 			.txmode = {
-				.mq_mode = ETH_MQ_TX_NONE,
+				.mq_mode = RTE_ETH_MQ_TX_NONE,
 			}
 	};
 	const uint16_t rx_rings = num_queues, tx_rings = num_queues;
@@ -218,9 +218,9 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 
 	info.default_rxconf.rx_drop_en = 1;
 
-	if (info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
@@ -392,7 +392,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index e9a388710647..f110fc129f55 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -89,17 +89,17 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST;
 
 static struct rte_eth_conf eth_port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index d2fe9f6b50d8..eb15899c902f 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -294,9 +294,9 @@ configure_eth_port(uint16_t port_id)
 		return ret;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port_id, rxRings, txRings, &port_conf);
 	if (ret != 0)
 		return ret;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf263d..86671655b432 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -307,19 +307,19 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_TCP,
+			.rss_hf = RTE_ETH_RSS_TCP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -2988,9 +2988,9 @@ parse_args(int argc, char **argv)
 
 			printf("jumbo frame is enabled - disabling simple TX path\n");
 			port_conf.rxmode.offloads |=
-					DEV_RX_OFFLOAD_JUMBO_FRAME;
+					RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			port_conf.txmode.offloads |=
-					DEV_TX_OFFLOAD_MULTI_SEGS;
+					RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 			/* if no max-pkt-len set, use the default value
 			 * RTE_ETHER_MAX_LEN
@@ -3466,7 +3466,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3577,9 +3577,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..2e68a3870a09 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -133,7 +133,7 @@ mempool_find(struct obj *obj, const char *name)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -145,12 +145,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -227,7 +227,7 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -245,9 +245,9 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -356,7 +356,7 @@ link_is_up(struct obj *obj, const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
 
 struct link *
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fbf7..db32b0d6c427 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -197,14 +197,14 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Force full Tx path in the driver, required for IEEE1588 */
-	port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 7ffccc8369dc..5ef14c176b11 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -51,19 +51,19 @@ static struct rte_mempool *pool = NULL;
  ***/
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -333,8 +333,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_rx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
@@ -379,8 +379,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_tx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..e750928fb89d 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -61,7 +61,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -106,9 +106,9 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE,
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 6f20f98b2b30..08df716dc0fb 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -145,17 +145,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (hw_timestamping) {
-		if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)) {
+		if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
 			printf("\nERROR: Port %u does not support hardware timestamping\n"
 					, port);
 			return -1;
 		}
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 		rte_mbuf_dyn_rx_timestamp_register(&hwts_dynfield_offset, NULL);
 		if (hwts_dynfield_offset < 0) {
 			printf("ERROR: Failed to register timestamp field\n");
diff --git a/examples/server_node_efd/server/init.c b/examples/server_node_efd/server/init.c
index 9ebd88bac20e..074fee5b26b2 100644
--- a/examples/server_node_efd/server/init.c
+++ b/examples/server_node_efd/server/init.c
@@ -96,7 +96,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_nodes;
@@ -115,9 +115,9 @@ init_port(uint16_t port_num)
 	if (retval != 0)
 		return retval;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/*
 	 * Standard DPDK port initialisation - config port, then set up
@@ -277,7 +277,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index ae08261befd7..737df4ca2a17 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -55,9 +55,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index bc3d71c8984e..b1d363ae21db 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -109,23 +109,23 @@ static int nb_sockets;
 /* empty vmdq configuration structure. Filled in programmatically */
 static struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 		/*
 		 * VLAN strip is necessary for 1G NIC such as I350,
 		 * this fixes bug of ipv4 forwarding in guest can't
 		 * forward packets from one virtio dev to another virtio dev.
 		 */
-		.offloads = DEV_RX_OFFLOAD_VLAN_STRIP,
+		.offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM |
-			     DEV_TX_OFFLOAD_VLAN_INSERT |
-			     DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_TCP_TSO),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO),
 	},
 	.rx_adv_conf = {
 		/*
@@ -133,7 +133,7 @@ static struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -290,9 +290,9 @@ port_init(uint16_t port)
 		return -1;
 
 	rx_rings = (uint16_t)dev_info.max_rx_queues;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Configure ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
 	if (retval != 0) {
@@ -562,8 +562,8 @@ us_vhost_parse_args(int argc, char **argv)
 		case 'P':
 			promiscuous = 1;
 			vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.rx_mode =
-				ETH_VMDQ_ACCEPT_BROADCAST |
-				ETH_VMDQ_ACCEPT_MULTICAST;
+				RTE_ETH_VMDQ_ACCEPT_BROADCAST |
+				RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 			break;
 
 		case OPT_VM2VM_NUM:
@@ -638,7 +638,7 @@ us_vhost_parse_args(int argc, char **argv)
 			mergeable = !!ret;
 			if (ret) {
 				vmdq_conf_default.rxmode.offloads |=
-					DEV_RX_OFFLOAD_JUMBO_FRAME;
+					RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 				vmdq_conf_default.rxmode.max_rx_pkt_len
 					= JUMBO_FRAME_MAX_SIZE;
 			}
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index 7d5bf6855426..dddcde40efe2 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -78,9 +78,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
@@ -278,7 +278,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 		       /* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index d3bc19f78ee5..16782a5d850f 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -66,12 +66,12 @@ static uint8_t rss_enable;
 /* empty vmdq configuration structure. Filled in programmatically */
 static const struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		/*
@@ -79,7 +79,7 @@ static const struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -157,11 +157,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
 	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
 		   sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -259,9 +259,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(port, rxRings, txRings, &port_conf);
 	if (retval != 0)
 		return retval;
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index 685a03bdd194..f58625a76227 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -60,8 +60,8 @@ static uint16_t ports[RTE_MAX_ETHPORTS];
 static unsigned num_ports;
 
 /* number of pools (if user does not specify any, 32 by default */
-static enum rte_eth_nb_pools num_pools = ETH_32_POOLS;
-static enum rte_eth_nb_tcs   num_tcs   = ETH_4_TCS;
+static enum rte_eth_nb_pools num_pools = RTE_ETH_32_POOLS;
+static enum rte_eth_nb_tcs   num_tcs   = RTE_ETH_4_TCS;
 static uint16_t num_queues, num_vmdq_queues;
 static uint16_t vmdq_pool_base, vmdq_queue_base;
 static uint8_t rss_enable;
@@ -69,11 +69,11 @@ static uint8_t rss_enable;
 /* Empty vmdq+dcb configuration structure. Filled in programmatically. 8< */
 static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_DCB,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_DCB,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_VMDQ_DCB,
+		.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB,
 	},
 	/*
 	 * should be overridden separately in code with
@@ -81,7 +81,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	 */
 	.rx_adv_conf = {
 		.vmdq_dcb_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -89,12 +89,12 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 			.dcb_tc = {0},
 		},
 		.dcb_rx_conf = {
-				.nb_tcs = ETH_4_TCS,
+				.nb_tcs = RTE_ETH_4_TCS,
 				/** Traffic class each UP mapped to. */
 				.dcb_tc = {0},
 		},
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -103,7 +103,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	},
 	.tx_adv_conf = {
 		.vmdq_dcb_tx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.dcb_tc = {0},
 		},
 	},
@@ -157,7 +157,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 		conf.pool_map[i].pools = 1UL << i;
 		vmdq_conf.pool_map[i].pools = 1UL << i;
 	}
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++){
 		conf.dcb_tc[i] = i % num_tcs;
 		dcb_conf.dcb_tc[i] = i % num_tcs;
 		tx_conf.dcb_tc[i] = i % num_tcs;
@@ -173,11 +173,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 	(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
 			  sizeof(tx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -271,9 +271,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
@@ -390,9 +390,9 @@ vmdq_parse_num_pools(const char *q_arg)
 	if (n != 16 && n != 32)
 		return -1;
 	if (n == 16)
-		num_pools = ETH_16_POOLS;
+		num_pools = RTE_ETH_16_POOLS;
 	else
-		num_pools = ETH_32_POOLS;
+		num_pools = RTE_ETH_32_POOLS;
 
 	return 0;
 }
@@ -412,9 +412,9 @@ vmdq_parse_num_tcs(const char *q_arg)
 	if (n != 4 && n != 8)
 		return -1;
 	if (n == 4)
-		num_tcs = ETH_4_TCS;
+		num_tcs = RTE_ETH_4_TCS;
 	else
-		num_tcs = ETH_8_TCS;
+		num_tcs = RTE_ETH_8_TCS;
 
 	return 0;
 }
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1b5..2be877d048cf 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -98,9 +98,6 @@ static const struct rte_eth_xstats_name_off eth_dev_txq_stats_strings[] = {
 #define RTE_NB_TXQ_STATS RTE_DIM(eth_dev_txq_stats_strings)
 
 #define RTE_RX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_RX_OFFLOAD_##_name, #_name }
-
-#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name)	\
 	{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
 
 static const struct {
@@ -126,14 +123,14 @@ static const struct {
 	RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
-	RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+	RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
 };
 
 #undef RTE_RX_OFFLOAD_BIT2STR
 #undef RTE_ETH_RX_OFFLOAD_BIT2STR
 
 #define RTE_TX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_TX_OFFLOAD_##_name, #_name }
+	{ RTE_ETH_TX_OFFLOAD_##_name, #_name }
 
 static const struct {
 	uint64_t offload;
@@ -1184,32 +1181,32 @@ uint32_t
 rte_eth_speed_bitflag(uint32_t speed, int duplex)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
-		return duplex ? ETH_LINK_SPEED_10M : ETH_LINK_SPEED_10M_HD;
-	case ETH_SPEED_NUM_100M:
-		return duplex ? ETH_LINK_SPEED_100M : ETH_LINK_SPEED_100M_HD;
-	case ETH_SPEED_NUM_1G:
-		return ETH_LINK_SPEED_1G;
-	case ETH_SPEED_NUM_2_5G:
-		return ETH_LINK_SPEED_2_5G;
-	case ETH_SPEED_NUM_5G:
-		return ETH_LINK_SPEED_5G;
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10M:
+		return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+	case RTE_ETH_SPEED_NUM_100M:
+		return duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+	case RTE_ETH_SPEED_NUM_1G:
+		return RTE_ETH_LINK_SPEED_1G;
+	case RTE_ETH_SPEED_NUM_2_5G:
+		return RTE_ETH_LINK_SPEED_2_5G;
+	case RTE_ETH_SPEED_NUM_5G:
+		return RTE_ETH_LINK_SPEED_5G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
@@ -1458,7 +1455,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If jumbo frames are enabled, check that the maximum RX packet
 	 * length is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
 			RTE_ETHDEV_LOG(ERR,
 				"Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
@@ -1491,7 +1488,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If LRO is enabled, check that the maximum aggregated packet
 	 * size is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		if (dev_conf->rxmode.max_lro_pkt_size == 0)
 			dev->data->dev_conf.rxmode.max_lro_pkt_size =
 				dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -1543,12 +1540,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	}
 
 	/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
-	if (((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) &&
-	    (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
+	    (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		RTE_ETHDEV_LOG(ERR,
 			"Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n",
 			port_id,
-			rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_RSS_HASH));
+			rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH));
 		ret = -EINVAL;
 		goto rollback;
 	}
@@ -2157,7 +2154,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	 * If LRO is enabled, check that the maximum aggregated packet
 	 * size is supported by the configured device.
 	 */
-	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (local_conf.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
 			dev->data->dev_conf.rxmode.max_lro_pkt_size =
 				dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -2752,21 +2749,21 @@ const char *
 rte_eth_link_speed_to_str(uint32_t link_speed)
 {
 	switch (link_speed) {
-	case ETH_SPEED_NUM_NONE: return "None";
-	case ETH_SPEED_NUM_10M:  return "10 Mbps";
-	case ETH_SPEED_NUM_100M: return "100 Mbps";
-	case ETH_SPEED_NUM_1G:   return "1 Gbps";
-	case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
-	case ETH_SPEED_NUM_5G:   return "5 Gbps";
-	case ETH_SPEED_NUM_10G:  return "10 Gbps";
-	case ETH_SPEED_NUM_20G:  return "20 Gbps";
-	case ETH_SPEED_NUM_25G:  return "25 Gbps";
-	case ETH_SPEED_NUM_40G:  return "40 Gbps";
-	case ETH_SPEED_NUM_50G:  return "50 Gbps";
-	case ETH_SPEED_NUM_56G:  return "56 Gbps";
-	case ETH_SPEED_NUM_100G: return "100 Gbps";
-	case ETH_SPEED_NUM_200G: return "200 Gbps";
-	case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
+	case RTE_ETH_SPEED_NUM_NONE: return "None";
+	case RTE_ETH_SPEED_NUM_10M:  return "10 Mbps";
+	case RTE_ETH_SPEED_NUM_100M: return "100 Mbps";
+	case RTE_ETH_SPEED_NUM_1G:   return "1 Gbps";
+	case RTE_ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
+	case RTE_ETH_SPEED_NUM_5G:   return "5 Gbps";
+	case RTE_ETH_SPEED_NUM_10G:  return "10 Gbps";
+	case RTE_ETH_SPEED_NUM_20G:  return "20 Gbps";
+	case RTE_ETH_SPEED_NUM_25G:  return "25 Gbps";
+	case RTE_ETH_SPEED_NUM_40G:  return "40 Gbps";
+	case RTE_ETH_SPEED_NUM_50G:  return "50 Gbps";
+	case RTE_ETH_SPEED_NUM_56G:  return "56 Gbps";
+	case RTE_ETH_SPEED_NUM_100G: return "100 Gbps";
+	case RTE_ETH_SPEED_NUM_200G: return "200 Gbps";
+	case RTE_ETH_SPEED_NUM_UNKNOWN: return "Unknown";
 	default: return "Invalid";
 	}
 }
@@ -2790,14 +2787,14 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 		return -EINVAL;
 	}
 
-	if (eth_link->link_status == ETH_LINK_DOWN)
+	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		return snprintf(str, len, "Link down");
 	else
 		return snprintf(str, len, "Link up at %s %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
-			(eth_link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
-			(eth_link->link_autoneg == ETH_LINK_AUTONEG) ?
+			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 			"Autoneg" : "Fixed");
 }
 
@@ -3663,7 +3660,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
 	dev = &rte_eth_devices[port_id];
 
 	if (!(dev->data->dev_conf.rxmode.offloads &
-	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	      RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_ETHDEV_LOG(ERR, "Port %u: vlan-filtering disabled\n",
 			port_id);
 		return -ENOSYS;
@@ -3750,44 +3747,44 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
 	dev_offloads = orig_offloads;
 
 	/* check which option changed by application */
-	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	cur = !!(offload_mask & RTE_ETH_VLAN_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
-		mask |= ETH_VLAN_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		mask |= RTE_ETH_VLAN_STRIP_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	cur = !!(offload_mask & RTE_ETH_VLAN_FILTER_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
-		mask |= ETH_VLAN_FILTER_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+		mask |= RTE_ETH_VLAN_FILTER_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND);
+	cur = !!(offload_mask & RTE_ETH_VLAN_EXTEND_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
-		mask |= ETH_VLAN_EXTEND_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+		mask |= RTE_ETH_VLAN_EXTEND_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_QINQ_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP);
+	cur = !!(offload_mask & RTE_ETH_QINQ_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
-		mask |= ETH_QINQ_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
+		mask |= RTE_ETH_QINQ_STRIP_MASK;
 	}
 
 	/*no change*/
@@ -3832,17 +3829,17 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
 	dev = &rte_eth_devices[port_id];
 	dev_offloads = &dev->data->dev_conf.rxmode.offloads;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		ret |= ETH_VLAN_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		ret |= RTE_ETH_VLAN_STRIP_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		ret |= ETH_VLAN_FILTER_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		ret |= RTE_ETH_VLAN_FILTER_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
-		ret |= ETH_VLAN_EXTEND_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+		ret |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
-		ret |= ETH_QINQ_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+		ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
 
 	return ret;
 }
@@ -3919,7 +3916,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
+	if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) {
 		RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n");
 		return -EINVAL;
 	}
@@ -4116,7 +4113,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4142,7 +4139,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4283,8 +4280,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
 			port_id);
 		return -EINVAL;
 	}
-	if (pool >= ETH_64_POOLS) {
-		RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", ETH_64_POOLS - 1);
+	if (pool >= RTE_ETH_64_POOLS) {
+		RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", RTE_ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
@@ -4548,21 +4545,21 @@ rte_eth_mirror_rule_set(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (mirror_conf->dst_pool >= ETH_64_POOLS) {
+	if (mirror_conf->dst_pool >= RTE_ETH_64_POOLS) {
 		RTE_ETHDEV_LOG(ERR, "Invalid dst pool, pool id must be 0-%d\n",
-			ETH_64_POOLS - 1);
+			RTE_ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
-	if ((mirror_conf->rule_type & (ETH_MIRROR_VIRTUAL_POOL_UP |
-	     ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
+	if ((mirror_conf->rule_type & (RTE_ETH_MIRROR_VIRTUAL_POOL_UP |
+	     RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
 	    (mirror_conf->pool_mask == 0)) {
 		RTE_ETHDEV_LOG(ERR,
 			"Invalid mirror pool, pool mask can not be 0\n");
 		return -EINVAL;
 	}
 
-	if ((mirror_conf->rule_type & ETH_MIRROR_VLAN) &&
+	if ((mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) &&
 	    mirror_conf->vlan.vlan_mask == 0) {
 		RTE_ETHDEV_LOG(ERR,
 			"Invalid vlan mask, vlan mask can not be 0\n");
@@ -6238,7 +6235,7 @@ eth_dev_handle_port_link_status(const char *cmd __rte_unused,
 	rte_tel_data_add_dict_string(d, status_str, "UP");
 	rte_tel_data_add_dict_u64(d, "speed", link.link_speed);
 	rte_tel_data_add_dict_string(d, "duplex",
-			(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex");
 	return 0;
 }
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d2b27c351fdb..3e4109491316 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -249,7 +249,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);
  * field is not supported, its value is 0.
  * All byte-related statistics do not include Ethernet FCS regardless
  * of whether these bytes have been delivered to the application
- * (see DEV_RX_OFFLOAD_KEEP_CRC).
+ * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
  */
 struct rte_eth_stats {
 	uint64_t ipackets;  /**< Total number of successfully received packets. */
@@ -279,42 +279,74 @@ struct rte_eth_stats {
 /**
  * Device supported speeds bitmap flags
  */
-#define ETH_LINK_SPEED_AUTONEG  (0 <<  0)  /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_FIXED    (1 <<  0)  /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_10M_HD   (1 <<  1)  /**<  10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M      (1 <<  2)  /**<  10 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M_HD  (1 <<  3)  /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M     (1 <<  4)  /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_1G       (1 <<  5)  /**<   1 Gbps */
-#define ETH_LINK_SPEED_2_5G     (1 <<  6)  /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_5G       (1 <<  7)  /**<   5 Gbps */
-#define ETH_LINK_SPEED_10G      (1 <<  8)  /**<  10 Gbps */
-#define ETH_LINK_SPEED_20G      (1 <<  9)  /**<  20 Gbps */
-#define ETH_LINK_SPEED_25G      (1 << 10)  /**<  25 Gbps */
-#define ETH_LINK_SPEED_40G      (1 << 11)  /**<  40 Gbps */
-#define ETH_LINK_SPEED_50G      (1 << 12)  /**<  50 Gbps */
-#define ETH_LINK_SPEED_56G      (1 << 13)  /**<  56 Gbps */
-#define ETH_LINK_SPEED_100G     (1 << 14)  /**< 100 Gbps */
-#define ETH_LINK_SPEED_200G     (1 << 15)  /**< 200 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG  (0 <<  0)  /**< Autonegotiate (all speeds) */
+#define ETH_LINK_SPEED_AUTONEG	RTE_ETH_LINK_SPEED_AUTONEG
+#define RTE_ETH_LINK_SPEED_FIXED    (1 <<  0)  /**< Disable autoneg (fixed speed) */
+#define ETH_LINK_SPEED_FIXED	RTE_ETH_LINK_SPEED_FIXED
+#define RTE_ETH_LINK_SPEED_10M_HD   (1 <<  1)  /**<  10 Mbps half-duplex */
+#define ETH_LINK_SPEED_10M_HD	RTE_ETH_LINK_SPEED_10M_HD
+#define RTE_ETH_LINK_SPEED_10M      (1 <<  2)  /**<  10 Mbps full-duplex */
+#define ETH_LINK_SPEED_10M	RTE_ETH_LINK_SPEED_10M
+#define RTE_ETH_LINK_SPEED_100M_HD  (1 <<  3)  /**< 100 Mbps half-duplex */
+#define ETH_LINK_SPEED_100M_HD	RTE_ETH_LINK_SPEED_100M_HD
+#define RTE_ETH_LINK_SPEED_100M     (1 <<  4)  /**< 100 Mbps full-duplex */
+#define ETH_LINK_SPEED_100M	RTE_ETH_LINK_SPEED_100M
+#define RTE_ETH_LINK_SPEED_1G       (1 <<  5)  /**<   1 Gbps */
+#define ETH_LINK_SPEED_1G	RTE_ETH_LINK_SPEED_1G
+#define RTE_ETH_LINK_SPEED_2_5G     (1 <<  6)  /**< 2.5 Gbps */
+#define ETH_LINK_SPEED_2_5G	RTE_ETH_LINK_SPEED_2_5G
+#define RTE_ETH_LINK_SPEED_5G       (1 <<  7)  /**<   5 Gbps */
+#define ETH_LINK_SPEED_5G	RTE_ETH_LINK_SPEED_5G
+#define RTE_ETH_LINK_SPEED_10G      (1 <<  8)  /**<  10 Gbps */
+#define ETH_LINK_SPEED_10G	RTE_ETH_LINK_SPEED_10G
+#define RTE_ETH_LINK_SPEED_20G      (1 <<  9)  /**<  20 Gbps */
+#define ETH_LINK_SPEED_20G	RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_25G      (1 << 10)  /**<  25 Gbps */
+#define ETH_LINK_SPEED_25G	RTE_ETH_LINK_SPEED_25G
+#define RTE_ETH_LINK_SPEED_40G      (1 << 11)  /**<  40 Gbps */
+#define ETH_LINK_SPEED_40G	RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_50G      (1 << 12)  /**<  50 Gbps */
+#define ETH_LINK_SPEED_50G	RTE_ETH_LINK_SPEED_50G
+#define RTE_ETH_LINK_SPEED_56G      (1 << 13)  /**<  56 Gbps */
+#define ETH_LINK_SPEED_56G	RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_100G     (1 << 14)  /**< 100 Gbps */
+#define ETH_LINK_SPEED_100G	RTE_ETH_LINK_SPEED_100G
+#define RTE_ETH_LINK_SPEED_200G     (1 << 15)  /**< 200 Gbps */
+#define ETH_LINK_SPEED_200G	RTE_ETH_LINK_SPEED_200G
 
 /**
  * Ethernet numeric link speeds in Mbps
  */
-#define ETH_SPEED_NUM_NONE         0 /**< Not defined */
-#define ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
-#define ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
-#define ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
-#define ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
-#define ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
-#define ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
-#define ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
-#define ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
-#define ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
-#define ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define RTE_ETH_SPEED_NUM_NONE         0 /**< Not defined */
+#define ETH_SPEED_NUM_NONE	RTE_ETH_SPEED_NUM_NONE
+#define RTE_ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
+#define ETH_SPEED_NUM_10M	RTE_ETH_SPEED_NUM_10M
+#define RTE_ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
+#define ETH_SPEED_NUM_100M	RTE_ETH_SPEED_NUM_100M
+#define RTE_ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
+#define ETH_SPEED_NUM_1G	RTE_ETH_SPEED_NUM_1G
+#define RTE_ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
+#define ETH_SPEED_NUM_2_5G	RTE_ETH_SPEED_NUM_2_5G
+#define RTE_ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
+#define ETH_SPEED_NUM_5G	RTE_ETH_SPEED_NUM_5G
+#define RTE_ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
+#define ETH_SPEED_NUM_10G	RTE_ETH_SPEED_NUM_10G
+#define RTE_ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
+#define ETH_SPEED_NUM_20G	RTE_ETH_SPEED_NUM_20G
+#define RTE_ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
+#define ETH_SPEED_NUM_25G	RTE_ETH_SPEED_NUM_25G
+#define RTE_ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
+#define ETH_SPEED_NUM_40G	RTE_ETH_SPEED_NUM_40G
+#define RTE_ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
+#define ETH_SPEED_NUM_50G	RTE_ETH_SPEED_NUM_50G
+#define RTE_ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
+#define ETH_SPEED_NUM_56G	RTE_ETH_SPEED_NUM_56G
+#define RTE_ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
+#define ETH_SPEED_NUM_100G	RTE_ETH_SPEED_NUM_100G
+#define RTE_ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
+#define ETH_SPEED_NUM_200G	RTE_ETH_SPEED_NUM_200G
+#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define ETH_SPEED_NUM_UNKNOWN	RTE_ETH_SPEED_NUM_UNKNOWN
 
 /**
  * A structure used to retrieve link-level information of an Ethernet port.
@@ -328,12 +360,18 @@ struct rte_eth_link {
 } __rte_aligned(8);      /**< aligned for atomic64 read/write */
 
 /* Utility constants */
-#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
-#define ETH_LINK_UP          1 /**< Link is up (see link_status). */
-#define ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
+#define ETH_LINK_HALF_DUPLEX	RTE_ETH_LINK_HALF_DUPLEX
+#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
+#define ETH_LINK_FULL_DUPLEX	RTE_ETH_LINK_FULL_DUPLEX
+#define RTE_ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
+#define ETH_LINK_DOWN		RTE_ETH_LINK_DOWN
+#define RTE_ETH_LINK_UP          1 /**< Link is up (see link_status). */
+#define ETH_LINK_UP		RTE_ETH_LINK_UP
+#define RTE_ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
+#define ETH_LINK_FIXED		RTE_ETH_LINK_FIXED
+#define RTE_ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define ETH_LINK_AUTONEG	RTE_ETH_LINK_AUTONEG
 #define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
 
 /**
@@ -349,9 +387,12 @@ struct rte_eth_thresh {
 /**
  *  Simple flags are used for rte_eth_conf.rxmode.mq_mode.
  */
-#define ETH_MQ_RX_RSS_FLAG  0x1
-#define ETH_MQ_RX_DCB_FLAG  0x2
-#define ETH_MQ_RX_VMDQ_FLAG 0x4
+#define RTE_ETH_MQ_RX_RSS_FLAG  0x1
+#define ETH_MQ_RX_RSS_FLAG	RTE_ETH_MQ_RX_RSS_FLAG
+#define RTE_ETH_MQ_RX_DCB_FLAG  0x2
+#define ETH_MQ_RX_DCB_FLAG	RTE_ETH_MQ_RX_DCB_FLAG
+#define RTE_ETH_MQ_RX_VMDQ_FLAG 0x4
+#define ETH_MQ_RX_VMDQ_FLAG	RTE_ETH_MQ_RX_VMDQ_FLAG
 
 /**
  *  A set of values to identify what method is to be used to route
@@ -359,50 +400,49 @@ struct rte_eth_thresh {
  */
 enum rte_eth_rx_mq_mode {
 	/** None of DCB,RSS or VMDQ mode */
-	ETH_MQ_RX_NONE = 0,
+	RTE_ETH_MQ_RX_NONE = 0,
 
 	/** For RX side, only RSS is on */
-	ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+	RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
 	/** For RX side,only DCB is on. */
-	ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Both DCB and RSS enable */
-	ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 
 	/** Only VMDQ, no RSS nor DCB */
-	ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** RSS mode with VMDQ */
-	ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** Use VMDQ+DCB to route traffic to queues */
-	ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Enable both VMDQ and DCB in VMDq */
-	ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
-				 ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
+				 RTE_ETH_MQ_RX_VMDQ_FLAG,
 };
 
-/**
- * for rx mq mode backward compatible
- */
-#define ETH_RSS                       ETH_MQ_RX_RSS
-#define VMDQ_DCB                      ETH_MQ_RX_VMDQ_DCB
-#define ETH_DCB_RX                    ETH_MQ_RX_DCB
+#define ETH_MQ_RX_NONE		RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS		RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB		RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS	RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY	RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS	RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB	RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS	RTE_ETH_MQ_RX_VMDQ_DCB_RSS
 
 /**
  * A set of values to identify what method is to be used to transmit
  * packets using multi-TCs.
  */
 enum rte_eth_tx_mq_mode {
-	ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
-	ETH_MQ_TX_DCB,          /**< For TX side,only DCB is on. */
-	ETH_MQ_TX_VMDQ_DCB,	/**< For TX side,both DCB and VT is on. */
-	ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
+	RTE_ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
+	RTE_ETH_MQ_TX_DCB,          /**< For TX side,only DCB is on. */
+	RTE_ETH_MQ_TX_VMDQ_DCB,	/**< For TX side,both DCB and VT is on. */
+	RTE_ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
 };
-
-/**
- * for tx mq mode backward compatible
- */
-#define ETH_DCB_NONE                ETH_MQ_TX_NONE
-#define ETH_VMDQ_DCB_TX             ETH_MQ_TX_VMDQ_DCB
-#define ETH_DCB_TX                  ETH_MQ_TX_DCB
+#define ETH_MQ_TX_NONE		RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB		RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB	RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY	RTE_ETH_MQ_TX_VMDQ_ONLY
 
 /**
  * A structure used to configure the RX features of an Ethernet port.
@@ -415,7 +455,7 @@ struct rte_eth_rxmode {
 	uint32_t max_lro_pkt_size;
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
 	/**
-	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -430,12 +470,17 @@ struct rte_eth_rxmode {
  * Note that single VLAN is treated the same as inner VLAN.
  */
 enum rte_vlan_type {
-	ETH_VLAN_TYPE_UNKNOWN = 0,
-	ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
-	ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
-	ETH_VLAN_TYPE_MAX,
+	RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
+	RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
+	RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
+	RTE_ETH_VLAN_TYPE_MAX,
 };
 
+#define ETH_VLAN_TYPE_UNKNOWN	RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER	RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER	RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX	RTE_ETH_VLAN_TYPE_MAX
+
 /**
  * A structure used to describe a vlan filter.
  * If the bit corresponding to a VID is set, such VID is on.
@@ -506,37 +551,68 @@ struct rte_eth_rss_conf {
  * Below macros are defined for RSS offload types, they can be used to
  * fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
  */
-#define ETH_RSS_IPV4               (1ULL << 2)
-#define ETH_RSS_FRAG_IPV4          (1ULL << 3)
-#define ETH_RSS_NONFRAG_IPV4_TCP   (1ULL << 4)
-#define ETH_RSS_NONFRAG_IPV4_UDP   (1ULL << 5)
-#define ETH_RSS_NONFRAG_IPV4_SCTP  (1ULL << 6)
-#define ETH_RSS_NONFRAG_IPV4_OTHER (1ULL << 7)
-#define ETH_RSS_IPV6               (1ULL << 8)
-#define ETH_RSS_FRAG_IPV6          (1ULL << 9)
-#define ETH_RSS_NONFRAG_IPV6_TCP   (1ULL << 10)
-#define ETH_RSS_NONFRAG_IPV6_UDP   (1ULL << 11)
-#define ETH_RSS_NONFRAG_IPV6_SCTP  (1ULL << 12)
-#define ETH_RSS_NONFRAG_IPV6_OTHER (1ULL << 13)
-#define ETH_RSS_L2_PAYLOAD         (1ULL << 14)
-#define ETH_RSS_IPV6_EX            (1ULL << 15)
-#define ETH_RSS_IPV6_TCP_EX        (1ULL << 16)
-#define ETH_RSS_IPV6_UDP_EX        (1ULL << 17)
-#define ETH_RSS_PORT               (1ULL << 18)
-#define ETH_RSS_VXLAN              (1ULL << 19)
-#define ETH_RSS_GENEVE             (1ULL << 20)
-#define ETH_RSS_NVGRE              (1ULL << 21)
-#define ETH_RSS_GTPU               (1ULL << 23)
-#define ETH_RSS_ETH                (1ULL << 24)
-#define ETH_RSS_S_VLAN             (1ULL << 25)
-#define ETH_RSS_C_VLAN             (1ULL << 26)
-#define ETH_RSS_ESP                (1ULL << 27)
-#define ETH_RSS_AH                 (1ULL << 28)
-#define ETH_RSS_L2TPV3             (1ULL << 29)
-#define ETH_RSS_PFCP               (1ULL << 30)
-#define ETH_RSS_PPPOE		   (1ULL << 31)
-#define ETH_RSS_ECPRI		   (1ULL << 32)
-#define ETH_RSS_MPLS		   (1ULL << 33)
+#define RTE_ETH_RSS_IPV4               (1ULL << 2)
+#define ETH_RSS_IPV4		RTE_ETH_RSS_IPV4
+#define RTE_ETH_RSS_FRAG_IPV4          (1ULL << 3)
+#define ETH_RSS_FRAG_IPV4	RTE_ETH_RSS_FRAG_IPV4
+#define RTE_ETH_RSS_NONFRAG_IPV4_TCP   (1ULL << 4)
+#define ETH_RSS_NONFRAG_IPV4_TCP	RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV4_UDP   (1ULL << 5)
+#define ETH_RSS_NONFRAG_IPV4_UDP	RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP  (1ULL << 6)
+#define ETH_RSS_NONFRAG_IPV4_SCTP	RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER (1ULL << 7)
+#define ETH_RSS_NONFRAG_IPV4_OTHER	RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define RTE_ETH_RSS_IPV6               (1ULL << 8)
+#define ETH_RSS_IPV6		RTE_ETH_RSS_IPV6
+#define RTE_ETH_RSS_FRAG_IPV6          (1ULL << 9)
+#define ETH_RSS_FRAG_IPV6	RTE_ETH_RSS_FRAG_IPV6
+#define RTE_ETH_RSS_NONFRAG_IPV6_TCP   (1ULL << 10)
+#define ETH_RSS_NONFRAG_IPV6_TCP	RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV6_UDP   (1ULL << 11)
+#define ETH_RSS_NONFRAG_IPV6_UDP	RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP  (1ULL << 12)
+#define ETH_RSS_NONFRAG_IPV6_SCTP	RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER (1ULL << 13)
+#define ETH_RSS_NONFRAG_IPV6_OTHER	RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define RTE_ETH_RSS_L2_PAYLOAD         (1ULL << 14)
+#define ETH_RSS_L2_PAYLOAD	RTE_ETH_RSS_L2_PAYLOAD
+#define RTE_ETH_RSS_IPV6_EX            (1ULL << 15)
+#define ETH_RSS_IPV6_EX		RTE_ETH_RSS_IPV6_EX
+#define RTE_ETH_RSS_IPV6_TCP_EX        (1ULL << 16)
+#define ETH_RSS_IPV6_TCP_EX	RTE_ETH_RSS_IPV6_TCP_EX
+#define RTE_ETH_RSS_IPV6_UDP_EX        (1ULL << 17)
+#define ETH_RSS_IPV6_UDP_EX	RTE_ETH_RSS_IPV6_UDP_EX
+#define RTE_ETH_RSS_PORT               (1ULL << 18)
+#define ETH_RSS_PORT		RTE_ETH_RSS_PORT
+#define RTE_ETH_RSS_VXLAN              (1ULL << 19)
+#define ETH_RSS_VXLAN		RTE_ETH_RSS_VXLAN
+#define RTE_ETH_RSS_GENEVE             (1ULL << 20)
+#define ETH_RSS_GENEVE		RTE_ETH_RSS_GENEVE
+#define RTE_ETH_RSS_NVGRE              (1ULL << 21)
+#define ETH_RSS_NVGRE		RTE_ETH_RSS_NVGRE
+#define RTE_ETH_RSS_GTPU               (1ULL << 23)
+#define ETH_RSS_GTPU		RTE_ETH_RSS_GTPU
+#define RTE_ETH_RSS_ETH                (1ULL << 24)
+#define ETH_RSS_ETH		RTE_ETH_RSS_ETH
+#define RTE_ETH_RSS_S_VLAN             (1ULL << 25)
+#define ETH_RSS_S_VLAN		RTE_ETH_RSS_S_VLAN
+#define RTE_ETH_RSS_C_VLAN             (1ULL << 26)
+#define ETH_RSS_C_VLAN		RTE_ETH_RSS_C_VLAN
+#define RTE_ETH_RSS_ESP                (1ULL << 27)
+#define ETH_RSS_ESP		RTE_ETH_RSS_ESP
+#define RTE_ETH_RSS_AH                 (1ULL << 28)
+#define ETH_RSS_AH		RTE_ETH_RSS_AH
+#define RTE_ETH_RSS_L2TPV3             (1ULL << 29)
+#define ETH_RSS_L2TPV3		RTE_ETH_RSS_L2TPV3
+#define RTE_ETH_RSS_PFCP               (1ULL << 30)
+#define ETH_RSS_PFCP		RTE_ETH_RSS_PFCP
+#define RTE_ETH_RSS_PPPOE              (1ULL << 31)
+#define ETH_RSS_PPPOE		RTE_ETH_RSS_PPPOE
+#define RTE_ETH_RSS_ECPRI              (1ULL << 32)
+#define ETH_RSS_ECPRI		RTE_ETH_RSS_ECPRI
+#define RTE_ETH_RSS_MPLS               (1ULL << 33)
+#define ETH_RSS_MPLS		RTE_ETH_RSS_MPLS
 
 /*
  * We use the following macros to combine with above ETH_RSS_* for
@@ -547,12 +623,18 @@ struct rte_eth_rss_conf {
  * the same level are used simultaneously, it is the same case as none of
  * them are added.
  */
-#define ETH_RSS_L3_SRC_ONLY        (1ULL << 63)
-#define ETH_RSS_L3_DST_ONLY        (1ULL << 62)
-#define ETH_RSS_L4_SRC_ONLY        (1ULL << 61)
-#define ETH_RSS_L4_DST_ONLY        (1ULL << 60)
-#define ETH_RSS_L2_SRC_ONLY        (1ULL << 59)
-#define ETH_RSS_L2_DST_ONLY        (1ULL << 58)
+#define RTE_ETH_RSS_L3_SRC_ONLY        (1ULL << 63)
+#define ETH_RSS_L3_SRC_ONLY	RTE_ETH_RSS_L3_SRC_ONLY
+#define RTE_ETH_RSS_L3_DST_ONLY        (1ULL << 62)
+#define ETH_RSS_L3_DST_ONLY	RTE_ETH_RSS_L3_DST_ONLY
+#define RTE_ETH_RSS_L4_SRC_ONLY        (1ULL << 61)
+#define ETH_RSS_L4_SRC_ONLY	RTE_ETH_RSS_L4_SRC_ONLY
+#define RTE_ETH_RSS_L4_DST_ONLY        (1ULL << 60)
+#define ETH_RSS_L4_DST_ONLY	RTE_ETH_RSS_L4_DST_ONLY
+#define RTE_ETH_RSS_L2_SRC_ONLY        (1ULL << 59)
+#define ETH_RSS_L2_SRC_ONLY	RTE_ETH_RSS_L2_SRC_ONLY
+#define RTE_ETH_RSS_L2_DST_ONLY        (1ULL << 58)
+#define ETH_RSS_L2_DST_ONLY	RTE_ETH_RSS_L2_DST_ONLY
 
 /*
  * Only select IPV6 address prefix as RSS input set according to
@@ -580,22 +662,27 @@ struct rte_eth_rss_conf {
  * It basically stands for the innermost encapsulation level RSS
  * can be performed on according to PMD and device capabilities.
  */
-#define ETH_RSS_LEVEL_PMD_DEFAULT       (0ULL << 50)
+#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT       (0ULL << 50)
+#define ETH_RSS_LEVEL_PMD_DEFAULT	RTE_ETH_RSS_LEVEL_PMD_DEFAULT
 
 /**
  * level 1, requests RSS to be performed on the outermost packet
  * encapsulation level.
  */
-#define ETH_RSS_LEVEL_OUTERMOST         (1ULL << 50)
+#define RTE_ETH_RSS_LEVEL_OUTERMOST         (1ULL << 50)
+#define ETH_RSS_LEVEL_OUTERMOST	RTE_ETH_RSS_LEVEL_OUTERMOST
 
 /**
  * level 2, requests RSS to be performed on the specified inner packet
  * encapsulation level, from outermost to innermost (lower to higher values).
  */
-#define ETH_RSS_LEVEL_INNERMOST         (2ULL << 50)
-#define ETH_RSS_LEVEL_MASK              (3ULL << 50)
+#define RTE_ETH_RSS_LEVEL_INNERMOST         (2ULL << 50)
+#define ETH_RSS_LEVEL_INNERMOST	RTE_ETH_RSS_LEVEL_INNERMOST
+#define RTE_ETH_RSS_LEVEL_MASK              (3ULL << 50)
+#define ETH_RSS_LEVEL_MASK	RTE_ETH_RSS_LEVEL_MASK
 
-#define ETH_RSS_LEVEL(rss_hf) ((rss_hf & ETH_RSS_LEVEL_MASK) >> 50)
+#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
+#define ETH_RSS_LEVEL(rss_hf)	RTE_ETH_RSS_LEVEL(rss_hf)
 
 /**
  * For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -619,213 +706,277 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
 	return rss_hf;
 }
 
-#define ETH_RSS_IPV6_PRE32 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE32 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32	RTE_ETH_RSS_IPV6_PRE32
 
-#define ETH_RSS_IPV6_PRE40 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE40 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40	RTE_ETH_RSS_IPV6_PRE40
 
-#define ETH_RSS_IPV6_PRE48 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE48 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48	RTE_ETH_RSS_IPV6_PRE48
 
-#define ETH_RSS_IPV6_PRE56 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE56 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56	RTE_ETH_RSS_IPV6_PRE56
 
-#define ETH_RSS_IPV6_PRE64 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE64 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64	RTE_ETH_RSS_IPV6_PRE64
 
-#define ETH_RSS_IPV6_PRE96 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE96 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96	RTE_ETH_RSS_IPV6_PRE96
 
-#define ETH_RSS_IPV6_PRE32_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_UDP	RTE_ETH_RSS_IPV6_PRE32_UDP
 
-#define ETH_RSS_IPV6_PRE40_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_UDP	RTE_ETH_RSS_IPV6_PRE40_UDP
 
-#define ETH_RSS_IPV6_PRE48_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_UDP	RTE_ETH_RSS_IPV6_PRE48_UDP
 
-#define ETH_RSS_IPV6_PRE56_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_UDP	RTE_ETH_RSS_IPV6_PRE56_UDP
 
-#define ETH_RSS_IPV6_PRE64_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_UDP	RTE_ETH_RSS_IPV6_PRE64_UDP
 
-#define ETH_RSS_IPV6_PRE96_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_UDP	RTE_ETH_RSS_IPV6_PRE96_UDP
 
-#define ETH_RSS_IPV6_PRE32_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_TCP	RTE_ETH_RSS_IPV6_PRE32_TCP
 
-#define ETH_RSS_IPV6_PRE40_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_TCP	RTE_ETH_RSS_IPV6_PRE40_TCP
 
-#define ETH_RSS_IPV6_PRE48_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_TCP	RTE_ETH_RSS_IPV6_PRE48_TCP
 
-#define ETH_RSS_IPV6_PRE56_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_TCP	RTE_ETH_RSS_IPV6_PRE56_TCP
 
-#define ETH_RSS_IPV6_PRE64_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_TCP	RTE_ETH_RSS_IPV6_PRE64_TCP
 
-#define ETH_RSS_IPV6_PRE96_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_TCP	RTE_ETH_RSS_IPV6_PRE96_TCP
 
-#define ETH_RSS_IPV6_PRE32_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_SCTP	RTE_ETH_RSS_IPV6_PRE32_SCTP
 
-#define ETH_RSS_IPV6_PRE40_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_SCTP	RTE_ETH_RSS_IPV6_PRE40_SCTP
 
-#define ETH_RSS_IPV6_PRE48_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_SCTP	RTE_ETH_RSS_IPV6_PRE48_SCTP
 
-#define ETH_RSS_IPV6_PRE56_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_SCTP	RTE_ETH_RSS_IPV6_PRE56_SCTP
 
-#define ETH_RSS_IPV6_PRE64_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_SCTP	RTE_ETH_RSS_IPV6_PRE64_SCTP
 
-#define ETH_RSS_IPV6_PRE96_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE96)
-
-#define ETH_RSS_IP ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_IPV6_EX)
-
-#define ETH_RSS_UDP ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_UDP_EX)
-
-#define ETH_RSS_TCP ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_IPV6_TCP_EX)
-
-#define ETH_RSS_SCTP ( \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define ETH_RSS_TUNNEL ( \
-	ETH_RSS_VXLAN  | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
-
-#define ETH_RSS_VLAN ( \
-	ETH_RSS_S_VLAN  | \
-	ETH_RSS_C_VLAN)
+#define ETH_RSS_IPV6_PRE96_SCTP	RTE_ETH_RSS_IPV6_PRE96_SCTP
+
+#define RTE_ETH_RSS_IP ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_IPV6_EX)
+#define ETH_RSS_IP	RTE_ETH_RSS_IP
+
+#define RTE_ETH_RSS_UDP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
+#define ETH_RSS_UDP	RTE_ETH_RSS_UDP
+
+#define RTE_ETH_RSS_TCP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_IPV6_TCP_EX)
+#define ETH_RSS_TCP	RTE_ETH_RSS_TCP
+
+#define RTE_ETH_RSS_SCTP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ETH_RSS_SCTP	RTE_ETH_RSS_SCTP
+
+#define RTE_ETH_RSS_TUNNEL ( \
+	RTE_ETH_RSS_VXLAN  | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
+#define ETH_RSS_TUNNEL	RTE_ETH_RSS_TUNNEL
+
+#define RTE_ETH_RSS_VLAN ( \
+	RTE_ETH_RSS_S_VLAN  | \
+	RTE_ETH_RSS_C_VLAN)
+#define ETH_RSS_VLAN	RTE_ETH_RSS_VLAN
 
 /**< Mask of valid RSS hash protocols */
-#define ETH_RSS_PROTO_MASK ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX | \
-	ETH_RSS_PORT  | \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE | \
-	ETH_RSS_MPLS)
+#define RTE_ETH_RSS_PROTO_MASK ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX | \
+	RTE_ETH_RSS_PORT  | \
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE | \
+	RTE_ETH_RSS_MPLS)
+#define ETH_RSS_PROTO_MASK	RTE_ETH_RSS_PROTO_MASK
 
 /*
  * Definitions used for redirection table entry size.
  * Some RSS RETA sizes may not be supported by some drivers, check the
  * documentation or the description of relevant functions for more details.
  */
-#define ETH_RSS_RETA_SIZE_64  64
-#define ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_512 512
-#define RTE_RETA_GROUP_SIZE   64
+#define RTE_ETH_RSS_RETA_SIZE_64  64
+#define ETH_RSS_RETA_SIZE_64	RTE_ETH_RSS_RETA_SIZE_64
+#define RTE_ETH_RSS_RETA_SIZE_128 128
+#define ETH_RSS_RETA_SIZE_128	RTE_ETH_RSS_RETA_SIZE_128
+#define RTE_ETH_RSS_RETA_SIZE_256 256
+#define ETH_RSS_RETA_SIZE_256	RTE_ETH_RSS_RETA_SIZE_256
+#define RTE_ETH_RSS_RETA_SIZE_512 512
+#define ETH_RSS_RETA_SIZE_512	RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_ETH_RETA_GROUP_SIZE   64
+#define RTE_RETA_GROUP_SIZE	RTE_ETH_RETA_GROUP_SIZE
 
 /* Definitions used for VMDQ and DCB functionality */
-#define ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDQ vlan filters. */
-#define ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
-#define ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDQ DCB queues. */
-#define ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDQ vlan filters. */
+#define ETH_VMDQ_MAX_VLAN_FILTERS	RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
+#define ETH_DCB_NUM_USER_PRIORITIES	RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define RTE_ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDQ DCB queues. */
+#define ETH_VMDQ_DCB_NUM_QUEUES	RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define RTE_ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define ETH_DCB_NUM_QUEUES	RTE_ETH_DCB_NUM_QUEUES
 
 /* DCB capability defines */
-#define ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
-#define ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define RTE_ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
+#define ETH_DCB_PG_SUPPORT	RTE_ETH_DCB_PG_SUPPORT
+#define RTE_ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define ETH_DCB_PFC_SUPPORT	RTE_ETH_DCB_PFC_SUPPORT
 
 /* Definitions used for VLAN Offload functionality */
-#define ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
+#define RTE_ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
+#define ETH_VLAN_STRIP_OFFLOAD	RTE_ETH_VLAN_STRIP_OFFLOAD
+#define RTE_ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
+#define ETH_VLAN_FILTER_OFFLOAD	RTE_ETH_VLAN_FILTER_OFFLOAD
+#define RTE_ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
+#define ETH_VLAN_EXTEND_OFFLOAD	RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define RTE_ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
+#define ETH_QINQ_STRIP_OFFLOAD	RTE_ETH_QINQ_STRIP_OFFLOAD
 
 /* Definitions used for mask VLAN setting */
-#define ETH_VLAN_STRIP_MASK   0x0001 /**< VLAN Strip  setting mask */
-#define ETH_VLAN_FILTER_MASK  0x0002 /**< VLAN Filter  setting mask*/
-#define ETH_VLAN_EXTEND_MASK  0x0004 /**< VLAN Extend  setting mask*/
-#define ETH_QINQ_STRIP_MASK   0x0008 /**< QINQ Strip  setting mask */
-#define ETH_VLAN_ID_MAX       0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define RTE_ETH_VLAN_STRIP_MASK   0x0001 /**< VLAN Strip  setting mask */
+#define ETH_VLAN_STRIP_MASK	RTE_ETH_VLAN_STRIP_MASK
+#define RTE_ETH_VLAN_FILTER_MASK  0x0002 /**< VLAN Filter  setting mask*/
+#define ETH_VLAN_FILTER_MASK	RTE_ETH_VLAN_FILTER_MASK
+#define RTE_ETH_VLAN_EXTEND_MASK  0x0004 /**< VLAN Extend  setting mask*/
+#define ETH_VLAN_EXTEND_MASK	RTE_ETH_VLAN_EXTEND_MASK
+#define RTE_ETH_QINQ_STRIP_MASK   0x0008 /**< QINQ Strip  setting mask */
+#define ETH_QINQ_STRIP_MASK	RTE_ETH_QINQ_STRIP_MASK
+#define RTE_ETH_VLAN_ID_MAX       0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define ETH_VLAN_ID_MAX		RTE_ETH_VLAN_ID_MAX
 
 /* Definitions used for receive MAC address   */
-#define ETH_NUM_RECEIVE_MAC_ADDR  128 /**< Maximum nb. of receive mac addr. */
+#define RTE_ETH_NUM_RECEIVE_MAC_ADDR  128 /**< Maximum nb. of receive mac addr. */
+#define ETH_NUM_RECEIVE_MAC_ADDR	RTE_ETH_NUM_RECEIVE_MAC_ADDR
 
 /* Definitions used for unicast hash  */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY  128 /**< Maximum nb. of UC hash array. */
+#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY  128 /**< Maximum nb. of UC hash array. */
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY	RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
 
 /* Definitions used for VMDQ pool rx mode setting */
-#define ETH_VMDQ_ACCEPT_UNTAG   0x0001 /**< accept untagged packets. */
-#define ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
-#define ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
-#define ETH_VMDQ_ACCEPT_BROADCAST   0x0008 /**< accept broadcast packets. */
-#define ETH_VMDQ_ACCEPT_MULTICAST   0x0010 /**< multicast promiscuous. */
+#define RTE_ETH_VMDQ_ACCEPT_UNTAG   0x0001 /**< accept untagged packets. */
+#define ETH_VMDQ_ACCEPT_UNTAG	RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define RTE_ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
+#define ETH_VMDQ_ACCEPT_HASH_MC	RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define RTE_ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_UC	RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define RTE_ETH_VMDQ_ACCEPT_BROADCAST   0x0008 /**< accept broadcast packets. */
+#define ETH_VMDQ_ACCEPT_BROADCAST	RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define RTE_ETH_VMDQ_ACCEPT_MULTICAST   0x0010 /**< multicast promiscuous. */
+#define ETH_VMDQ_ACCEPT_MULTICAST	RTE_ETH_VMDQ_ACCEPT_MULTICAST
 
 /** Maximum nb. of vlan per mirror rule */
-#define ETH_MIRROR_MAX_VLANS       64
+#define RTE_ETH_MIRROR_MAX_VLANS       64
+#define ETH_MIRROR_MAX_VLANS	RTE_ETH_MIRROR_MAX_VLANS
 
-#define ETH_MIRROR_VIRTUAL_POOL_UP     0x01  /**< Virtual Pool uplink Mirroring. */
-#define ETH_MIRROR_UPLINK_PORT         0x02  /**< Uplink Port Mirroring. */
-#define ETH_MIRROR_DOWNLINK_PORT       0x04  /**< Downlink Port Mirroring. */
-#define ETH_MIRROR_VLAN                0x08  /**< VLAN Mirroring. */
-#define ETH_MIRROR_VIRTUAL_POOL_DOWN   0x10  /**< Virtual Pool downlink Mirroring. */
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_UP     0x01  /**< Virtual Pool uplink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_UP	RTE_ETH_MIRROR_VIRTUAL_POOL_UP
+#define RTE_ETH_MIRROR_UPLINK_PORT         0x02  /**< Uplink Port Mirroring. */
+#define ETH_MIRROR_UPLINK_PORT	RTE_ETH_MIRROR_UPLINK_PORT
+#define RTE_ETH_MIRROR_DOWNLINK_PORT       0x04  /**< Downlink Port Mirroring. */
+#define ETH_MIRROR_DOWNLINK_PORT	RTE_ETH_MIRROR_DOWNLINK_PORT
+#define RTE_ETH_MIRROR_VLAN                0x08  /**< VLAN Mirroring. */
+#define ETH_MIRROR_VLAN		RTE_ETH_MIRROR_VLAN
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN   0x10  /**< Virtual Pool downlink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_DOWN	RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN
 
 /**
  * A structure used to configure VLAN traffic mirror of an Ethernet port.
@@ -865,20 +1016,26 @@ struct rte_eth_rss_reta_entry64 {
  * in DCB configurations
  */
 enum rte_eth_nb_tcs {
-	ETH_4_TCS = 4, /**< 4 TCs with DCB. */
-	ETH_8_TCS = 8  /**< 8 TCs with DCB. */
+	RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
+	RTE_ETH_8_TCS = 8  /**< 8 TCs with DCB. */
 };
+#define ETH_4_TCS RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_ETH_8_TCS
 
 /**
  * This enum indicates the possible number of queue pools
  * in VMDQ configurations.
  */
 enum rte_eth_nb_pools {
-	ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
-	ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
-	ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
-	ETH_64_POOLS = 64   /**< 64 VMDq pools. */
+	RTE_ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
+	RTE_ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
+	RTE_ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
+	RTE_ETH_64_POOLS = 64   /**< 64 VMDq pools. */
 };
+#define ETH_8_POOLS	RTE_ETH_8_POOLS
+#define ETH_16_POOLS	RTE_ETH_16_POOLS
+#define ETH_32_POOLS	RTE_ETH_32_POOLS
+#define ETH_64_POOLS	RTE_ETH_64_POOLS
 
 /* This structure may be extended in future. */
 struct rte_eth_dcb_rx_conf {
@@ -964,7 +1121,7 @@ struct rte_eth_vmdq_rx_conf {
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
 	/**
-	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -1048,7 +1205,7 @@ struct rte_eth_rxconf {
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
 	/**
-	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1077,7 +1234,7 @@ struct rte_eth_txconf {
 
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	/**
-	 * Per-queue Tx offloads to be set  using DEV_TX_OFFLOAD_* flags.
+	 * Per-queue Tx offloads to be set  using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1188,12 +1345,17 @@ struct rte_eth_desc_lim {
  * This enum indicates the flow control mode
  */
 enum rte_eth_fc_mode {
-	RTE_FC_NONE = 0, /**< Disable flow control. */
-	RTE_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
-	RTE_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
-	RTE_FC_FULL      /**< Enable flow control on both side. */
+	RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
+	RTE_ETH_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
+	RTE_ETH_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
+	RTE_ETH_FC_FULL      /**< Enable flow control on both side. */
 };
 
+#define RTE_FC_NONE	RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE	RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE	RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL	RTE_ETH_FC_FULL
+
 /**
  * A structure used to configure Ethernet flow control parameter.
  * These parameters will be configured into the register of the NIC.
@@ -1224,18 +1386,29 @@ struct rte_eth_pfc_conf {
  * @see rte_eth_udp_tunnel
  */
 enum rte_eth_tunnel_type {
-	RTE_TUNNEL_TYPE_NONE = 0,
-	RTE_TUNNEL_TYPE_VXLAN,
-	RTE_TUNNEL_TYPE_GENEVE,
-	RTE_TUNNEL_TYPE_TEREDO,
-	RTE_TUNNEL_TYPE_NVGRE,
-	RTE_TUNNEL_TYPE_IP_IN_GRE,
-	RTE_L2_TUNNEL_TYPE_E_TAG,
-	RTE_TUNNEL_TYPE_VXLAN_GPE,
-	RTE_TUNNEL_TYPE_ECPRI,
-	RTE_TUNNEL_TYPE_MAX,
+	RTE_ETH_TUNNEL_TYPE_NONE = 0,
+	RTE_ETH_TUNNEL_TYPE_VXLAN,
+	RTE_ETH_TUNNEL_TYPE_GENEVE,
+	RTE_ETH_TUNNEL_TYPE_TEREDO,
+	RTE_ETH_TUNNEL_TYPE_NVGRE,
+	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
+	RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
+	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
+	RTE_ETH_TUNNEL_TYPE_ECPRI,
+	RTE_ETH_TUNNEL_TYPE_MAX,
 };
 
+#define RTE_TUNNEL_TYPE_NONE		RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN		RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE		RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO		RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE		RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG	RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI		RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX		RTE_ETH_TUNNEL_TYPE_MAX
+
 /* Deprecated API file for rte_eth_dev_filter_* functions */
 #include "rte_eth_ctrl.h"
 
@@ -1243,11 +1416,16 @@ enum rte_eth_tunnel_type {
  *  Memory space that can be configured to store Flow Director filters
  *  in the board memory.
  */
-enum rte_fdir_pballoc_type {
-	RTE_FDIR_PBALLOC_64K = 0,  /**< 64k. */
-	RTE_FDIR_PBALLOC_128K,     /**< 128k. */
-	RTE_FDIR_PBALLOC_256K,     /**< 256k. */
+enum rte_eth_fdir_pballoc_type {
+	RTE_ETH_FDIR_PBALLOC_64K = 0,  /**< 64k. */
+	RTE_ETH_FDIR_PBALLOC_128K,     /**< 128k. */
+	RTE_ETH_FDIR_PBALLOC_256K,     /**< 256k. */
 };
+#define rte_fdir_pballoc_type	rte_eth_fdir_pballoc_type
+
+#define RTE_FDIR_PBALLOC_64K	RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K	RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K	RTE_ETH_FDIR_PBALLOC_256K
 
 /**
  *  Select report mode of FDIR hash information in RX descriptors.
@@ -1264,9 +1442,9 @@ enum rte_fdir_status_mode {
  *
  * If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
  */
-struct rte_fdir_conf {
+struct rte_eth_fdir_conf {
 	enum rte_fdir_mode mode; /**< Flow Director mode. */
-	enum rte_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
+	enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
 	enum rte_fdir_status_mode status;  /**< How to report FDIR hash. */
 	/** RX queue of packets matching a "drop" filter in perfect mode. */
 	uint8_t drop_queue;
@@ -1275,6 +1453,8 @@ struct rte_fdir_conf {
 	/**< Flex payload configuration. */
 };
 
+#define rte_fdir_conf rte_eth_fdir_conf
+
 /**
  * UDP tunneling configuration.
  *
@@ -1292,7 +1472,7 @@ struct rte_eth_udp_tunnel {
 /**
  * A structure used to enable/disable specific device interrupts.
  */
-struct rte_intr_conf {
+struct rte_eth_intr_conf {
 	/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
 	uint32_t lsc:1;
 	/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
@@ -1301,6 +1481,8 @@ struct rte_intr_conf {
 	uint32_t rmv:1;
 };
 
+#define rte_intr_conf rte_eth_intr_conf
+
 /**
  * A structure used to configure an Ethernet port.
  * Depending upon the RX multi-queue mode, extra advanced
@@ -1348,39 +1530,60 @@ struct rte_eth_conf {
 /**
  * RX offload capabilities of a device.
  */
-#define DEV_RX_OFFLOAD_VLAN_STRIP  0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO     0x00000010
-#define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
-#define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
-#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
-#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
-#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
-#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP  0x00000001
+#define DEV_RX_OFFLOAD_VLAN_STRIP	RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM	RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM   0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM	RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM   0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM	RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_LRO     0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO		RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP  0x00000020
+#define DEV_RX_OFFLOAD_QINQ_STRIP	RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_MACSEC_STRIP	RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_FILTER	RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define RTE_ETH_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	RTE_ETH_RX_OFFLOAD_JUMBO_FRAME
+#define RTE_ETH_RX_OFFLOAD_SCATTER	0x00002000
+#define DEV_RX_OFFLOAD_SCATTER		RTE_ETH_RX_OFFLOAD_SCATTER
 /**
  * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_RX_OFFLOAD_TIMESTAMP	0x00004000
-#define DEV_RX_OFFLOAD_SECURITY         0x00008000
-#define DEV_RX_OFFLOAD_KEEP_CRC		0x00010000
-#define DEV_RX_OFFLOAD_SCTP_CKSUM	0x00020000
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
-#define DEV_RX_OFFLOAD_RSS_HASH		0x00080000
+#define RTE_ETH_RX_OFFLOAD_TIMESTAMP	0x00004000
+#define DEV_RX_OFFLOAD_TIMESTAMP	RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define RTE_ETH_RX_OFFLOAD_SECURITY     0x00008000
+#define DEV_RX_OFFLOAD_SECURITY		RTE_ETH_RX_OFFLOAD_SECURITY
+#define RTE_ETH_RX_OFFLOAD_KEEP_CRC	0x00010000
+#define DEV_RX_OFFLOAD_KEEP_CRC		RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM	0x00020000
+#define DEV_RX_OFFLOAD_SCTP_CKSUM	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM	RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_RSS_HASH	0x00080000
+#define DEV_RX_OFFLOAD_RSS_HASH	RTE_ETH_RX_OFFLOAD_RSS_HASH
 #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
 
-#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				 DEV_RX_OFFLOAD_UDP_CKSUM | \
-				 DEV_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			     DEV_RX_OFFLOAD_VLAN_FILTER | \
-			     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-			     DEV_RX_OFFLOAD_QINQ_STRIP)
+#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_CHECKSUM	RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define DEV_RX_OFFLOAD_VLAN	RTE_ETH_RX_OFFLOAD_VLAN
 
 /*
  * If new Rx offload capabilities are defined, they also must be
@@ -1390,52 +1593,74 @@ struct rte_eth_conf {
 /**
  * TX offload capabilities of a device.
  */
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM  0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO     0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO     0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_MACSEC_INSERT    0x00002000
-#define DEV_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT 0x00000001
+#define DEV_TX_OFFLOAD_VLAN_INSERT	RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM   0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM	RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM   0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM	RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_TSO     0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO		RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define RTE_ETH_TX_OFFLOAD_UDP_TSO     0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO		RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT 0x00000100
+#define DEV_TX_OFFLOAD_QINQ_INSERT	RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO	RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO	RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO	RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO	RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT    0x00002000
+#define DEV_TX_OFFLOAD_MACSEC_INSERT	RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
+#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE	RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
-#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS	0x00008000
+#define DEV_TX_OFFLOAD_MULTI_SEGS	RTE_ETH_TX_OFFLOAD_MULTI_SEGS
 /**< Device supports multi segment send. */
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
 /**< Device supports optimization for fast release of mbufs.
  *   When set application must guarantee that per-queue all mbufs comes from
  *   the same mempool and has refcnt = 1.
  */
-#define DEV_TX_OFFLOAD_SECURITY         0x00020000
+#define RTE_ETH_TX_OFFLOAD_SECURITY         0x00020000
+#define DEV_TX_OFFLOAD_SECURITY	RTE_ETH_TX_OFFLOAD_SECURITY
 /**
  * Device supports generic UDP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO	RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
 /**
  * Device supports generic IP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define DEV_TX_OFFLOAD_IP_TNL_TSO	RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
 /** Device supports outer UDP checksum */
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
 /**
  * Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP	RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
 /*
  * If new Tx offload capabilities are defined, they also must be
  * mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1672,8 +1897,10 @@ struct rte_eth_xstat_name {
 	char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
 };
 
-#define ETH_DCB_NUM_TCS    8
-#define ETH_MAX_VMDQ_POOL  64
+#define RTE_ETH_DCB_NUM_TCS    8
+#define ETH_DCB_NUM_TCS	RTE_ETH_DCB_NUM_TCS
+#define RTE_ETH_MAX_VMDQ_POOL  64
+#define ETH_MAX_VMDQ_POOL	RTE_ETH_MAX_VMDQ_POOL
 
 /**
  * A structure used to get the information of queue and
@@ -1749,13 +1976,17 @@ struct rte_eth_fec_capa {
  */
 
 /**< l2 tunnel enable mask */
-#define ETH_L2_TUNNEL_ENABLE_MASK       0x00000001
+#define RTE_ETH_L2_TUNNEL_ENABLE_MASK       0x00000001
+#define ETH_L2_TUNNEL_ENABLE_MASK	RTE_ETH_L2_TUNNEL_ENABLE_MASK
 /**< l2 tunnel insertion mask */
-#define ETH_L2_TUNNEL_INSERTION_MASK    0x00000002
+#define RTE_ETH_L2_TUNNEL_INSERTION_MASK    0x00000002
+#define ETH_L2_TUNNEL_INSERTION_MASK	RTE_ETH_L2_TUNNEL_INSERTION_MASK
 /**< l2 tunnel stripping mask */
-#define ETH_L2_TUNNEL_STRIPPING_MASK    0x00000004
+#define RTE_ETH_L2_TUNNEL_STRIPPING_MASK    0x00000004
+#define ETH_L2_TUNNEL_STRIPPING_MASK	RTE_ETH_L2_TUNNEL_STRIPPING_MASK
 /**< l2 tunnel forwarding mask */
-#define ETH_L2_TUNNEL_FORWARDING_MASK   0x00000008
+#define RTE_ETH_L2_TUNNEL_FORWARDING_MASK   0x00000008
+#define ETH_L2_TUNNEL_FORWARDING_MASK	RTE_ETH_L2_TUNNEL_FORWARDING_MASK
 
 /**
  * Function type used for RX packet processing packet callbacks.
@@ -2075,7 +2306,7 @@ uint16_t rte_eth_dev_count_total(void);
 uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 
 /**
- * Get DEV_RX_OFFLOAD_* flag name.
+ * Get RTE_ETH_RX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2085,7 +2316,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 const char *rte_eth_dev_rx_offload_name(uint64_t offload);
 
 /**
- * Get DEV_TX_OFFLOAD_* flag name.
+ * Get RTE_ETH_TX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2179,7 +2410,7 @@ rte_eth_dev_is_removed(uint16_t port_id);
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
  *   In addition it contains the hardware offloads features to activate using
- *   the DEV_RX_OFFLOAD_* flags.
+ *   the RTE_ETH_RX_OFFLOAD_* flags.
  *   If an offloading set in rx_conf->offloads
  *   hasn't been set in the input argument eth_conf->rxmode.offloads
  *   to rte_eth_dev_configure(), it is a new added offloading, it must be
@@ -5231,7 +5462,7 @@ static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
  * rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf*  buffers
  * of those packets whose transmission was effectively completed.
  *
- * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
  * invoke this function concurrently on the same tx queue without SW lock.
  * @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
  *
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc2e..8e6156a62aa9 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -154,7 +154,7 @@ struct rte_eth_dev_data {
 			/**< Device Ethernet link address.
 			 *   @see rte_eth_dev_release_port()
 			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+	uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
 			/**< Bitmap associating MAC addresses to pools. */
 	struct rte_ether_addr *hash_mac_addrs;
 			/**< Device Ethernet MAC addresses of hash filtering.
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 70f455d47d60..4152067368b8 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2593,7 +2593,7 @@ struct rte_flow_action_rss {
 	 * through.
 	 */
 	uint32_t level;
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len; /**< Hash key length in bytes. */
 	uint32_t queue_num; /**< Number of entries in @p queue. */
 	const uint8_t *key; /**< Hash key. */
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee05..119fdcac0b7f 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -15,13 +15,13 @@
 #include "gso_udp4.h"
 
 #define ILLEGAL_UDP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & DEV_TX_OFFLOAD_UDP_TSO) == 0) || \
+	((((ctx)->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO) == 0) || \
 	 (ctx)->gso_size < RTE_GSO_UDP_SEG_SIZE_MIN)
 
 #define ILLEGAL_TCP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & (DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
+	((((ctx)->gso_types & (RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
 		(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
 
 int
@@ -54,28 +54,28 @@ rte_gso_segment(struct rte_mbuf *pkt,
 	ol_flags = pkt->ol_flags;
 
 	if ((IS_IPV4_VXLAN_TCP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
 			((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
-			 (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
+			 (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) &&
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_tunnel_udp4_segment(pkt, gso_size,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_TCP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_UDP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
 				indirect_pool, pkts_out, nb_pkts_out);
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b171..0a65afc11e64 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -52,11 +52,11 @@ struct rte_gso_ctx {
 	uint32_t gso_types;
 	/**< the bit mask of required GSO types. The GSO library
 	 * uses the same macros as that of describing device TX
-	 * offloading capabilities (i.e. DEV_TX_OFFLOAD_*_TSO) for
+	 * offloading capabilities (i.e. RTE_ETH_TX_OFFLOAD_*_TSO) for
 	 * gso_types.
 	 *
 	 * For example, if applications want to segment TCP/IPv4
-	 * packets, set DEV_TX_OFFLOAD_TCP_TSO in gso_types.
+	 * packets, set RTE_ETH_TX_OFFLOAD_TCP_TSO in gso_types.
 	 */
 	uint16_t gso_size;
 	/**< maximum size of an output GSO segment, including packet
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f58102..50e611e887bf 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -192,7 +192,7 @@ extern "C" {
  * The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
  * HW capability, At minimum, the PMD should support
  * PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ * if the RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
  */
 #define PKT_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
 
@@ -215,7 +215,7 @@ extern "C" {
  * a) Fill outer_l2_len and outer_l3_len in mbuf.
  * b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
  * c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
- * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
+ * 2) Configure RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
  */
 #define PKT_TX_OUTER_UDP_CKSUM     (1ULL << 41)
 
@@ -258,7 +258,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
  * or PKT_TX_TUNNEL_IPIP if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_IP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
@@ -271,7 +271,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
  * if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index 13f06d8ed25b..be43f8c328e1 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -37,7 +37,7 @@
  *   of the dynamic field to be registered:
  *   const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
  * - The application initializes the PMD, and asks for this feature
- *   at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ *   at port initialization by passing RTE_ETH_RX_OFFLOAD_MY_FEATURE in
  *   rxconf. This will make the PMD to register the field by calling
  *   rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
  *   stores the returned offset.
-- 
2.31.1


^ permalink raw reply	[relevance 1%]
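
For reference, a minimal sketch of how an application might request software
segmentation with the renamed flags, following the rte_gso.h comment above.
This is illustrative only and not part of the patch: the helper name, pool
arguments and segment size are placeholders (applications may also keep using
the old DEV_TX_OFFLOAD_* names, which the patch keeps as aliases).

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>
#include <rte_gso.h>

/* Illustrative helper: request TCP/IPv4 and VXLAN-tunnelled TCP
 * segmentation from the GSO library using the new flag names.
 */
static void
app_gso_ctx_init(struct rte_gso_ctx *ctx,
		 struct rte_mempool *direct_pool,
		 struct rte_mempool *indirect_pool)
{
	memset(ctx, 0, sizeof(*ctx));
	ctx->direct_pool = direct_pool;     /* mbufs holding segment headers */
	ctx->indirect_pool = indirect_pool; /* indirect mbufs referencing payload */
	ctx->gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO |
			 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
	ctx->gso_size = 1400;               /* output segment size, incl. headers */
}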

* [dpdk-dev] [Bug 797] [dpdk-21.11]Segmentation fault when start txonly packet forward after set txpkts=40, 64 and txsplit=rand
@ 2021-08-27  1:47  4% bugzilla
  0 siblings, 0 replies; 200+ results
From: bugzilla @ 2021-08-27  1:47 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=797

            Bug ID: 797
           Summary: [dpdk-21.11]Segmentation fault when start txonly
                    packet forward after set txpkts=40,64 and txsplit=rand
           Product: DPDK
           Version: unspecified
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: testpmd
          Assignee: dev@dpdk.org
          Reporter: yux.jiang@intel.com
  Target Milestone: ---

Environment
DPDK version: 
commit fdab8f2e17493192d555cd88cf28b06269174326 (HEAD, origin/main)
Author: Thomas Monjalon <thomas@monjalon.net>
Date:   Sun Aug 8 21:26:58 2021 +0200

    version: 21.11-rc0

    Start a new release cycle with empty release notes.

    The ABI version becomes 22.0.
    The map files are updated to the new ABI major number (22).
    The ABI exceptions are dropped and CI ABI checks are disabled because
    compatibility is not preserved.

    Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
    Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
    Acked-by: David Marchand <david.marchand@redhat.com>
Other software versions: 
OS: PRETTY_NAME="Fedora 34 (Server Edition)"/5.12.14-300.fc34.x86_64
Compiler: gcc version 11.1.1 20210531 (Red Hat 11.1.1-3) 
Hardware platform: Intel(R) Xeon(R) Gold 6252N CPU @ 2.30GHz
NIC hardware: Ethernet Controller X710 for 10GbE SFP+ 1572
NIC firmware: 
[root@fedora dpdk]# ethtool -i enp217s0f0
driver: i40e
version: 2.15.9
firmware-version: 8.30 0x8000a49d 1.2926.0

Test Setup
# Build dpdk
rm -rf x86_64-native-linuxapp-gcc
CC=gcc meson --werror -Denable_kmods=True -Dbuildtype=debug -Dlibdir=lib --default-library=static x86_64-native-linuxapp-gcc
ninja -C x86_64-native-linuxapp-gcc -j 40
# bind nic to vfio-pci and start testpmd
./usertools/dpdk-devbind.py -b vfio-pci d9:00.0 d9:00.1
x86_64-native-linuxapp-gcc/app/dpdk-testpmd  -l 1,2,3,4 -n 4
--file-prefix=dpdk_339481_20210823205202   -- -i
# Set txpkts and txsplit and start
testpmd> set fwd txonly
Set txonly packet forwarding mode
testpmd> set txpkts 40,64
testpmd> set txsplit rand
testpmd> start
testpmd> 

Show the output from the previous commands.
testpmd> start
txonly packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support
enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 1) -> TX P=1/Q=0 (socket 1) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00
  txonly packet forwarding packets/burst=32
  packet len=104 - nb packet segments=2
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
testpmd> Segmentation fault (core dumped)
[root@fedora dpdk]#

Expected Result
No core dump should occur

Regression
Is this issue a regression: (Y/N) N
This was the first test with this command

Stack Trace or Log
# If txsplit is set to on, there is no core dump.
# gdb ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd
./core-lcore-worker-2-46341-1629784290
warning: Unable to find libthread_db matching inferior's thread library, thread
debugging will not be available.
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1,2,3,4
-n 4 --file-prefix=dpdk_'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x00000000005fbbc6 in copy_buf_to_pkt_segs (buf=0x3de3e3e <pkt_udp_hdr+6>,
len=2, pkt=0x166c5e400, offset=34) at ../app/test-pmd/txonly.c:86
86                      seg_buf = rte_pktmbuf_mtod(seg, char *);
[Current thread is 1 (Thread 0x7f47712a0400 (LWP 46344))]
Missing separate debuginfos, use: dnf debuginfo-install
elfutils-libelf-0.185-2.fc34.x86_64 glibc-2.33-5.fc34.x86_64
jansson-2.13.1-2.fc34.x86_64 libfdt-1.6.1-1.fc34.x86_64
libibverbs-34.0-3.fc34.x86_64 libnl3-3.5.0-6.fc34.x86_64
libpcap-1.10.1-1.fc34.x86_64 numactl-libs-2.0.14-3.fc34.x86_64
openssl-libs-1.1.1k-1.fc34.x86_64 zlib-1.2.11-26.fc34.x86_64
(gdb) bt
#0  0x00000000005fbbc6 in copy_buf_to_pkt_segs (buf=0x3de3e3e <pkt_udp_hdr+6>,
len=2, pkt=0x166c5e400, offset=34) at ../app/test-pmd/txonly.c:86
#1  0x00000000005fe51c in copy_buf_to_pkt (buf=0x3de3e38 <pkt_udp_hdr>, len=8,
pkt=0x166c5e400, offset=34) at ../app/test-pmd/txonly.c:100
#2  0x00000000005ff51b in pkt_burst_prepare (pkt=0x166c5e400, mbp=0x17f0f0180,
eth_hdr=0x7f477129b182, vlan_tci=0, vlan_tci_outer=0, ol_flags=0, idx=1,
fs=0x166bbb4c0) at ../app/test-pmd/txonly.c:251
#3  0x000000000060043f in pkt_burst_transmit (fs=0x166bbb4c0) at
../app/test-pmd/txonly.c:372
#4  0x00000000005f0362 in run_pkt_fwd_on_lcore (fc=0x17fb3adc0,
pkt_fwd=0x5ff7eb <pkt_burst_transmit>) at ../app/test-pmd/testpmd.c:2049
#5  0x00000000005f0452 in start_pkt_forward_on_core (fwd_arg=0x17fb3adc0) at
../app/test-pmd/testpmd.c:2075
#6  0x0000000000aa8910 in eal_thread_loop (arg=0x0) at
../lib/eal/linux/eal_thread.c:127
#7  0x00007f477254f299 in start_thread () from /lib64/libpthread.so.0
#8  0x00007f47724776a3 in clone () from /lib64/libc.so.6
(gdb)

-- 
You are receiving this mail because:
You are the assignee for the bug.

^ permalink raw reply	[relevance 4%]
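
For context on the crash site above: with "set txpkts 40,64" the generated
Ethernet/IPv4/UDP headers take 42 bytes, so writing the 8-byte UDP header at
offset 34 has to spill past the 40-byte first mbuf segment (the trace shows
the copy continuing with buf=pkt_udp_hdr+6, len=2). When txsplit=rand leaves
the packet with fewer segments than the headers need, seg->next is NULL and
the rte_pktmbuf_mtod() call at txonly.c:86 dereferences it. The sketch below
is only an approximation of that helper with a defensive bail-out added, to
illustrate where the NULL dereference happens; it is not the actual upstream
fix.

#include <rte_mbuf.h>
#include <rte_memcpy.h>

/* Approximate shape of app/test-pmd/txonly.c:copy_buf_to_pkt_segs() with an
 * illustrative guard: stop copying if the segment chain runs out before the
 * whole header has been written.
 */
static void
copy_buf_to_pkt_segs_guarded(void *buf, unsigned int len,
			     struct rte_mbuf *pkt, unsigned int offset)
{
	struct rte_mbuf *seg = pkt;
	char *seg_buf;
	unsigned int copy_len;

	while (offset >= seg->data_len) {
		offset -= seg->data_len;
		seg = seg->next;
		if (seg == NULL)
			return; /* header does not fit into the segment chain */
	}
	copy_len = seg->data_len - offset;
	seg_buf = rte_pktmbuf_mtod_offset(seg, char *, offset);
	while (len > copy_len) {
		rte_memcpy(seg_buf, buf, (size_t)copy_len);
		len -= copy_len;
		buf = (char *)buf + copy_len;
		seg = seg->next;
		if (seg == NULL)
			return; /* this is the NULL dereference seen at txonly.c:86 */
		seg_buf = rte_pktmbuf_mtod(seg, char *);
		copy_len = seg->data_len;
	}
	rte_memcpy(seg_buf, buf, (size_t)len);
}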

* [dpdk-dev] [PATCH 01/38] common/sfc_efx/base: update MCDI headers
  @ 2021-08-27  6:56  2% ` Andrew Rybchenko
  0 siblings, 0 replies; 200+ results
From: Andrew Rybchenko @ 2021-08-27  6:56 UTC (permalink / raw)
  To: dev

Pick up new FW interface definitions.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
---
 drivers/common/sfc_efx/base/efx_regs_mcdi.h | 1211 ++++++++++++++++++-
 1 file changed, 1176 insertions(+), 35 deletions(-)

diff --git a/drivers/common/sfc_efx/base/efx_regs_mcdi.h b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
index a3c9f076ec..2daf825a36 100644
--- a/drivers/common/sfc_efx/base/efx_regs_mcdi.h
+++ b/drivers/common/sfc_efx/base/efx_regs_mcdi.h
@@ -492,6 +492,24 @@
  */
 #define	MAE_FIELD_SUPPORTED_MATCH_MASK 0x5
 
+/* MAE_CT_VNI_MODE enum: Controls the layout of the VNI input to the conntrack
+ * lookup. (Values are not arbitrary - constrained by table access ABI.)
+ */
+/* enum: The VNI input to the conntrack lookup will be zero. */
+#define	MAE_CT_VNI_MODE_ZERO 0x0
+/* enum: The VNI input to the conntrack lookup will be the VNI (VXLAN/Geneve)
+ * or VSID (NVGRE) field from the packet.
+ */
+#define	MAE_CT_VNI_MODE_VNI 0x1
+/* enum: The VNI input to the conntrack lookup will be the VLAN ID from the
+ * outermost VLAN tag (in bottom 12 bits; top 12 bits zero).
+ */
+#define	MAE_CT_VNI_MODE_1VLAN 0x2
+/* enum: The VNI input to the conntrack lookup will be the VLAN IDs from both
+ * VLAN tags (outermost in bottom 12 bits, innermost in top 12 bits).
+ */
+#define	MAE_CT_VNI_MODE_2VLAN 0x3
+
 /* MAE_FIELD enum: NB: this enum shares namespace with the support status enum.
  */
 /* enum: Source mport upon entering the MAE. */
@@ -617,7 +635,8 @@
 
 /* MAE_MCDI_ENCAP_TYPE enum: Encapsulation type. Defines how the payload will
  * be parsed to an inner frame. Other values are reserved. Unknown values
- * should be treated same as NONE.
+ * should be treated same as NONE. (Values are not arbitrary - constrained by
+ * table access ABI.)
  */
 #define	MAE_MCDI_ENCAP_TYPE_NONE 0x0 /* enum */
 /* enum: Don't assume enum aligns with support bitmask... */
@@ -634,6 +653,18 @@
 /* enum: Selects the virtual NIC plugged into the MAE switch */
 #define	MAE_MPORT_END_VNIC 0x2
 
+/* MAE_COUNTER_TYPE enum: The datapath maintains several sets of counters, each
+ * being associated with a different table. Note that the same counter ID may
+ * be allocated by different counter blocks, so e.g. AR counter 42 is different
+ * from CT counter 42. Generation counts are also type-specific. This value is
+ * also present in the header of streaming counter packets, in the IDENTIFIER
+ * field (see packetiser packet format definitions).
+ */
+/* enum: Action Rule counters - can be referenced in AR response. */
+#define	MAE_COUNTER_TYPE_AR 0x0
+/* enum: Conntrack counters - can be referenced in CT response. */
+#define	MAE_COUNTER_TYPE_CT 0x1
+
 /* MCDI_EVENT structuredef: The structure of an MCDI_EVENT on Siena/EF10/EF100
  * platforms
  */
@@ -4547,6 +4578,8 @@
 #define	MC_CMD_MEDIA_BASE_T 0x6
 /* enum: QSFP+. */
 #define	MC_CMD_MEDIA_QSFP_PLUS 0x7
+/* enum: DSFP. */
+#define	MC_CMD_MEDIA_DSFP 0x8
 #define	MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_OFST 48
 #define	MC_CMD_GET_PHY_CFG_OUT_MMD_MASK_LEN 4
 /* enum: Native clause 22 */
@@ -7823,11 +7856,16 @@
 /***********************************/
 /* MC_CMD_GET_PHY_MEDIA_INFO
  * Read media-specific data from PHY (e.g. SFP/SFP+ module ID information for
- * SFP+ PHYs). The 'media type' can be found via GET_PHY_CFG
- * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid 'page number' input values, and the
- * output data, are interpreted on a per-type basis. For SFP+: PAGE=0 or 1
+ * SFP+ PHYs). The "media type" can be found via GET_PHY_CFG
+ * (GET_PHY_CFG_OUT_MEDIA_TYPE); the valid "page number" input values, and the
+ * output data, are interpreted on a per-type basis. For SFP+, PAGE=0 or 1
  * returns a 128-byte block read from module I2C address 0xA0 offset 0 or 0x80.
- * Anything else: currently undefined. Locks required: None. Return code: 0.
+ * For QSFP, PAGE=-1 is the lower (unbanked) page. PAGE=2 is the EEPROM and
+ * PAGE=3 is the module limits. For DSFP, module addressing requires a
+ * "BANK:PAGE". Not every bank has the same number of pages. See the Common
+ * Management Interface Specification (CMIS) for further details. A BANK:PAGE
+ * of "0xffff:0xffff" retrieves the lower (unbanked) page. Locks required -
+ * None. Return code - 0.
  */
 #define	MC_CMD_GET_PHY_MEDIA_INFO 0x4b
 #define	MC_CMD_GET_PHY_MEDIA_INFO_MSGSET 0x4b
@@ -7839,6 +7877,12 @@
 #define	MC_CMD_GET_PHY_MEDIA_INFO_IN_LEN 4
 #define	MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_OFST 0
 #define	MC_CMD_GET_PHY_MEDIA_INFO_IN_PAGE_LEN 4
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_OFST 0
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_LBN 0
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_PAGE_WIDTH 16
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_OFST 0
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_LBN 16
+#define	MC_CMD_GET_PHY_MEDIA_INFO_IN_DSFP_BANK_WIDTH 16
 
 /* MC_CMD_GET_PHY_MEDIA_INFO_OUT msgresponse */
 #define	MC_CMD_GET_PHY_MEDIA_INFO_OUT_LENMIN 5
@@ -9350,6 +9394,8 @@
 #define	NVRAM_PARTITION_TYPE_FPGA_JUMP 0xb08
 /* enum: FPGA Validate XCLBIN */
 #define	NVRAM_PARTITION_TYPE_FPGA_XCLBIN_VALIDATE 0xb09
+/* enum: FPGA XOCL Configuration information */
+#define	NVRAM_PARTITION_TYPE_FPGA_XOCL_CONFIG 0xb0a
 /* enum: MUM firmware partition */
 #define	NVRAM_PARTITION_TYPE_MUM_FIRMWARE 0xc00
 /* enum: SUC firmware partition (this is intentionally an alias of
@@ -9427,6 +9473,8 @@
 #define	NVRAM_PARTITION_TYPE_BUNDLE_LOG 0x1e02
 /* enum: Partition for Solarflare gPXE bootrom installed via Bundle update. */
 #define	NVRAM_PARTITION_TYPE_EXPANSION_ROM_INTERNAL 0x1e03
+/* enum: Partition to store ASN.1 format Bundle Signature for checking. */
+#define	NVRAM_PARTITION_TYPE_BUNDLE_SIGNATURE 0x1e04
 /* enum: Test partition on SmartNIC system microcontroller (SUC) */
 #define	NVRAM_PARTITION_TYPE_SUC_TEST 0x1f00
 /* enum: System microcontroller access to primary FPGA flash. */
@@ -10051,6 +10099,158 @@
 #define	MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
 #define	MC_CMD_INIT_EVQ_V2_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
 
+/* MC_CMD_INIT_EVQ_V3_IN msgrequest: Extended request to specify per-queue
+ * event merge timeouts.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_LEN 556
+/* Size, in entries */
+#define	MC_CMD_INIT_EVQ_V3_IN_SIZE_OFST 0
+#define	MC_CMD_INIT_EVQ_V3_IN_SIZE_LEN 4
+/* Desired instance. Must be set to a specific instance, which is a function
+ * local queue index. The calling client must be the currently-assigned user of
+ * this VI (see MC_CMD_SET_VI_USER).
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_INSTANCE_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_IN_INSTANCE_LEN 4
+/* The initial timer value. The load value is ignored if the timer mode is DIS.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_OFST 8
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_LOAD_LEN 4
+/* The reload value is ignored in one-shot modes */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_OFST 12
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_RELOAD_LEN 4
+/* tbd */
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAGS_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAGS_LEN 4
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_LBN 0
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INTERRUPTING_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_LBN 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RPTR_DOS_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_LBN 2
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_INT_ARMD_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_LBN 3
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_CUT_THRU_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_LBN 4
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_RX_MERGE_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_LBN 5
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TX_MERGE_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_LBN 6
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_USE_TIMER_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LBN 7
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_WIDTH 4
+/* enum: All initialisation flags specified by host. */
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_MANUAL 0x0
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the lowest latency achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_LOW_LATENCY 0x1
+/* enum: MEDFORD only. Certain initialisation flags specified by host may be
+ * over-ridden by firmware based on licenses and firmware variant in order to
+ * provide the best throughput achievable. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_THROUGHPUT 0x2
+/* enum: MEDFORD only. Certain initialisation flags may be over-ridden by
+ * firmware based on licenses and firmware variant. See
+ * MC_CMD_INIT_EVQ_V2/MC_CMD_INIT_EVQ_V2_OUT/FLAGS for list of affected flags.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_TYPE_AUTO 0x3
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_OFST 16
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_LBN 11
+#define	MC_CMD_INIT_EVQ_V3_IN_FLAG_EXT_WIDTH_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_OFST 20
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_LEN 4
+/* enum: Disabled */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_MODE_DIS 0x0
+/* enum: Immediate */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_IMMED_START 0x1
+/* enum: Triggered */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_TRIG_START 0x2
+/* enum: Hold-off */
+#define	MC_CMD_INIT_EVQ_V3_IN_TMR_INT_HLDOFF 0x3
+/* Target EVQ for wakeups if in wakeup mode. */
+#define	MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_OFST 24
+#define	MC_CMD_INIT_EVQ_V3_IN_TARGET_EVQ_LEN 4
+/* Target interrupt if in interrupting mode (note union with target EVQ). Use
+ * MC_CMD_RESOURCE_INSTANCE_ANY unless a specific one required for test
+ * purposes.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_OFST 24
+#define	MC_CMD_INIT_EVQ_V3_IN_IRQ_NUM_LEN 4
+/* Event Counter Mode. */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_OFST 28
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_LEN 4
+/* enum: Disabled */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_DIS 0x0
+/* enum: Disabled */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RX 0x1
+/* enum: Disabled */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_TX 0x2
+/* enum: Disabled */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_MODE_RXTX 0x3
+/* Event queue packet count threshold. */
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_OFST 32
+#define	MC_CMD_INIT_EVQ_V3_IN_COUNT_THRSHLD_LEN 4
+/* 64-bit address of 4k of 4k-aligned host memory buffer */
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_OFST 36
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LEN 8
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_OFST 36
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LEN 4
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_LBN 288
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_LO_WIDTH 32
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_OFST 40
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LEN 4
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_LBN 320
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_HI_WIDTH 32
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MINNUM 1
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define	MC_CMD_INIT_EVQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
+/* Receive event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_RX_MERGE is not set.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_OFST 548
+#define	MC_CMD_INIT_EVQ_V3_IN_RX_MERGE_TIMEOUT_NS_LEN 4
+/* Transmit event merge timeout to configure, in nanoseconds. The valid range
+ * and granularity are device specific. Specify 0 to use the firmware's default
+ * value. This field is ignored and per-queue merging is disabled if
+ * MC_CMD_INIT_EVQ/MC_CMD_INIT_EVQ_IN/FLAG_TX_MERGE is not set.
+ */
+#define	MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_OFST 552
+#define	MC_CMD_INIT_EVQ_V3_IN_TX_MERGE_TIMEOUT_NS_LEN 4
+
+/* MC_CMD_INIT_EVQ_V3_OUT msgresponse */
+#define	MC_CMD_INIT_EVQ_V3_OUT_LEN 8
+/* Only valid if INTRFLAG was true */
+#define	MC_CMD_INIT_EVQ_V3_OUT_IRQ_OFST 0
+#define	MC_CMD_INIT_EVQ_V3_OUT_IRQ_LEN 4
+/* Actual configuration applied on the card */
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAGS_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAGS_LEN 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_LBN 0
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_CUT_THRU_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_LBN 1
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RX_MERGE_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_LBN 2
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_TX_MERGE_WIDTH 1
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_OFST 4
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_LBN 3
+#define	MC_CMD_INIT_EVQ_V3_OUT_FLAG_RXQ_FORCE_EV_MERGING_WIDTH 1
+
 /* QUEUE_CRC_MODE structuredef */
 #define	QUEUE_CRC_MODE_LEN 1
 #define	QUEUE_CRC_MODE_MODE_LBN 0
@@ -10256,7 +10456,9 @@
 #define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LEN 4
 #define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_LBN 256
 #define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_NUM 64
+#define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MINNUM 0
+#define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM 64
+#define	MC_CMD_INIT_RXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
 /* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
 #define	MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_OFST 540
 #define	MC_CMD_INIT_RXQ_EXT_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10360,7 +10562,9 @@
 #define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LEN 4
 #define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_LBN 256
 #define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_HI_WIDTH 32
-#define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_NUM 64
+#define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MINNUM 0
+#define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM 64
+#define	MC_CMD_INIT_RXQ_V3_IN_DMA_ADDR_MAXNUM_MCDI2 64
 /* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
 #define	MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_OFST 540
 #define	MC_CMD_INIT_RXQ_V3_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10493,7 +10697,9 @@
 #define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LEN 4
 #define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_LBN 256
 #define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_HI_WIDTH 32
-#define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_NUM 64
+#define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MINNUM 0
+#define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM 64
+#define	MC_CMD_INIT_RXQ_V4_IN_DMA_ADDR_MAXNUM_MCDI2 64
 /* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
 #define	MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_OFST 540
 #define	MC_CMD_INIT_RXQ_V4_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10639,7 +10845,9 @@
 #define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LEN 4
 #define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_LBN 256
 #define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_HI_WIDTH 32
-#define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_NUM 64
+#define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MINNUM 0
+#define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM 64
+#define	MC_CMD_INIT_RXQ_V5_IN_DMA_ADDR_MAXNUM_MCDI2 64
 /* Maximum length of packet to receive, if SNAPSHOT_MODE flag is set */
 #define	MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_OFST 540
 #define	MC_CMD_INIT_RXQ_V5_IN_SNAPSHOT_LENGTH_LEN 4
@@ -10878,7 +11086,7 @@
 #define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LEN 4
 #define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_LBN 256
 #define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_HI_WIDTH 32
-#define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 1
+#define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MINNUM 0
 #define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM 64
 #define	MC_CMD_INIT_TXQ_EXT_IN_DMA_ADDR_MAXNUM_MCDI2 64
 /* Flags related to Qbb flow control mode. */
@@ -12228,6 +12436,8 @@
  * rules inserted by MC_CMD_VNIC_ENCAP_RULE_ADD. (ef100 and later)
  */
 #define	MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_MATCHES 0x5
+/* enum: read the supported encapsulation types for the VNIC */
+#define	MC_CMD_GET_PARSER_DISP_INFO_IN_OP_GET_SUPPORTED_VNIC_ENCAP_TYPES 0x6
 
 /* MC_CMD_GET_PARSER_DISP_INFO_OUT msgresponse */
 #define	MC_CMD_GET_PARSER_DISP_INFO_OUT_LENMIN 8
@@ -12336,6 +12546,30 @@
 #define	MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM 61
 #define	MC_CMD_GET_PARSER_DISP_VNIC_ENCAP_MATCHES_OUT_SUPPORTED_MATCHES_MAXNUM_MCDI2 253
 
+/* MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT msgresponse: Returns
+ * the supported encapsulation types for the VNIC
+ */
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_LEN 8
+/* The op code OP_GET_SUPPORTED_VNIC_ENCAP_TYPES is returned */
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_OFST 0
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_OP_LEN 4
+/*            Enum values, see field(s): */
+/*               MC_CMD_GET_PARSER_DISP_INFO_IN/OP */
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define	MC_CMD_GET_PARSER_DISP_SUPPORTED_VNIC_ENCAP_TYPES_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+
 
 /***********************************/
 /* MC_CMD_PARSER_DISP_RW
@@ -16236,6 +16470,9 @@
 #define	MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
 #define	MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
 #define	MC_CMD_GET_CAPABILITIES_V7_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define	MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V7_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
 
 /* MC_CMD_GET_CAPABILITIES_V8_OUT msgresponse */
 #define	MC_CMD_GET_CAPABILITIES_V8_OUT_LEN 160
@@ -16734,6 +16971,9 @@
 #define	MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
 #define	MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
 #define	MC_CMD_GET_CAPABILITIES_V8_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define	MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V8_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
 /* These bits are reserved for communicating test-specific capabilities to
  * host-side test software. All production drivers should treat this field as
  * opaque.
@@ -17246,6 +17486,9 @@
 #define	MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
 #define	MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
 #define	MC_CMD_GET_CAPABILITIES_V9_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define	MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V9_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
 /* These bits are reserved for communicating test-specific capabilities to
  * host-side test software. All production drivers should treat this field as
  * opaque.
@@ -17793,6 +18036,9 @@
 #define	MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_OFST 148
 #define	MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_LBN 11
 #define	MC_CMD_GET_CAPABILITIES_V10_OUT_MAE_ACTION_SET_ALLOC_V2_SUPPORTED_WIDTH 1
+#define	MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_OFST 148
+#define	MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_LBN 12
+#define	MC_CMD_GET_CAPABILITIES_V10_OUT_RSS_STEER_ON_OUTER_SUPPORTED_WIDTH 1
 /* These bits are reserved for communicating test-specific capabilities to
  * host-side test software. All production drivers should treat this field as
  * opaque.
@@ -19900,6 +20146,18 @@
 #define	MC_CMD_GET_FUNCTION_INFO_OUT_VF_OFST 4
 #define	MC_CMD_GET_FUNCTION_INFO_OUT_VF_LEN 4
 
+/* MC_CMD_GET_FUNCTION_INFO_OUT_V2 msgresponse */
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_LEN 12
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_OFST 0
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_PF_LEN 4
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_OFST 4
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_VF_LEN 4
+/* Values from PCIE_INTERFACE enumeration. For NICs with a single interface, or
+ * in the case of a V1 response, this should be HOST_PRIMARY.
+ */
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_OFST 8
+#define	MC_CMD_GET_FUNCTION_INFO_OUT_V2_INTF_LEN 4
+
 
 /***********************************/
 /* MC_CMD_ENABLE_OFFLINE_BIST
@@ -25682,6 +25940,9 @@
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_OFST 0
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_LBN 6
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_USER_MARK_WIDTH 1
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_OFST 0
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_LBN 7
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_MPORT_WIDTH 1
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_OFST 0
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_LBN 7
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_INGRESS_VPORT_WIDTH 1
@@ -25691,6 +25952,12 @@
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_OFST 0
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_LBN 9
 #define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIP_TCI_WIDTH 1
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_OFST 0
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_LBN 10
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VLAN_STRIPPED_WIDTH 1
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_OFST 0
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_LBN 11
+#define	MC_CMD_GET_RX_PREFIX_ID_IN_VSWITCH_STATUS_WIDTH 1
 
 /* MC_CMD_GET_RX_PREFIX_ID_OUT msgresponse */
 #define	MC_CMD_GET_RX_PREFIX_ID_OUT_LENMIN 8
@@ -25736,9 +26003,12 @@
 #define	RX_PREFIX_FIELD_INFO_PARTIAL_TSTAMP 0x4 /* enum */
 #define	RX_PREFIX_FIELD_INFO_RSS_HASH 0x5 /* enum */
 #define	RX_PREFIX_FIELD_INFO_USER_MARK 0x6 /* enum */
+#define	RX_PREFIX_FIELD_INFO_INGRESS_MPORT 0x7 /* enum */
 #define	RX_PREFIX_FIELD_INFO_INGRESS_VPORT 0x7 /* enum */
 #define	RX_PREFIX_FIELD_INFO_CSUM_FRAME 0x8 /* enum */
 #define	RX_PREFIX_FIELD_INFO_VLAN_STRIP_TCI 0x9 /* enum */
+#define	RX_PREFIX_FIELD_INFO_VLAN_STRIPPED 0xa /* enum */
+#define	RX_PREFIX_FIELD_INFO_VSWITCH_STATUS 0xb /* enum */
 #define	RX_PREFIX_FIELD_INFO_TYPE_LBN 24
 #define	RX_PREFIX_FIELD_INFO_TYPE_WIDTH 8
 
@@ -26063,6 +26333,10 @@
 #define	MC_CMD_FPGA_IN_OP_SET_INTERNAL_LINK 0x5
 /* enum: Read internal link configuration. */
 #define	MC_CMD_FPGA_IN_OP_GET_INTERNAL_LINK 0x6
+/* enum: Get MAC statistics of FPGA external port. */
+#define	MC_CMD_FPGA_IN_OP_GET_MAC_STATS 0x7
+/* enum: Set configuration on internal FPGA MAC. */
+#define	MC_CMD_FPGA_IN_OP_SET_INTERNAL_MAC 0x8
 
 /* MC_CMD_FPGA_OP_GET_VERSION_IN msgrequest: Get the FPGA version string. A
  * free-format string is returned in response to this command. Any checks on
@@ -26206,6 +26480,87 @@
 #define	MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_OFST 4
 #define	MC_CMD_FPGA_OP_GET_INTERNAL_LINK_OUT_SPEED_LEN 4
 
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_IN msgrequest: Get FPGA external port MAC
+ * statistics.
+ */
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_IN_LEN 4
+/* Sub-command code. Must be OP_GET_MAC_STATS. */
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_OFST 0
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_IN_OP_LEN 4
+
+/* MC_CMD_FPGA_OP_GET_MAC_STATS_OUT msgresponse */
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMIN 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX 252
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LENMAX_MCDI2 1020
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_LEN(num) (4+8*(num))
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_NUM(len) (((len)-4)/8)
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_OFST 0
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_NUM_STATS_LEN 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_OFST 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LEN 8
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_OFST 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LEN 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_LBN 32
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_LO_WIDTH 32
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_OFST 8
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LEN 4
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_LBN 64
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_HI_WIDTH 32
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MINNUM 0
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM 31
+#define	MC_CMD_FPGA_OP_GET_MAC_STATS_OUT_STATISTICS_MAXNUM_MCDI2 127
+#define	MC_CMD_FPGA_MAC_TX_TOTAL_PACKETS 0x0 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_TOTAL_BYTES 0x1 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_PACKETS 0x2 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_TOTAL_GOOD_BYTES 0x3 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_BAD_FCS 0x4 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_PAUSE 0x5 /* enum */
+#define	MC_CMD_FPGA_MAC_TX_USER_PAUSE 0x6 /* enum */
+#define	MC_CMD_FPGA_MAC_RX_TOTAL_PACKETS 0x7 /* enum */
+#define	MC_CMD_FPGA_MAC_RX_TOTAL_BYTES 0x8 /* enum */
+#define	MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_PACKETS 0x9 /* enum */
+#define	MC_CMD_FPGA_MAC_RX_TOTAL_GOOD_BYTES 0xa /* enum */
+#define	MC_CMD_FPGA_MAC_RX_BAD_FCS 0xb /* enum */
+#define	MC_CMD_FPGA_MAC_RX_PAUSE 0xc /* enum */
+#define	MC_CMD_FPGA_MAC_RX_USER_PAUSE 0xd /* enum */
+#define	MC_CMD_FPGA_MAC_RX_UNDERSIZE 0xe /* enum */
+#define	MC_CMD_FPGA_MAC_RX_OVERSIZE 0xf /* enum */
+#define	MC_CMD_FPGA_MAC_RX_FRAMING_ERR 0x10 /* enum */
+#define	MC_CMD_FPGA_MAC_FEC_UNCORRECTED_ERRORS 0x11 /* enum */
+#define	MC_CMD_FPGA_MAC_FEC_CORRECTED_ERRORS 0x12 /* enum */
+
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN msgrequest: Configures the internal port
+ * MAC on the FPGA.
+ */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_LEN 20
+/* Sub-command code. Must be OP_SET_INTERNAL_MAC. */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_OFST 0
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_OP_LEN 4
+/* Select which parameters to configure. */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_OFST 4
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CONTROL_LEN 4
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_OFST 4
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_LBN 0
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_MTU_WIDTH 1
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_OFST 4
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_LBN 1
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_DRAIN_WIDTH 1
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_OFST 4
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_LBN 2
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_CFG_FCNTL_WIDTH 1
+/* The MTU to be programmed into the MAC. */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_OFST 8
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_MTU_LEN 4
+/* Drain Tx FIFO */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_OFST 12
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_DRAIN_LEN 4
+/* flow control configuration. See MC_CMD_SET_MAC/MC_CMD_SET_MAC_IN/FCNTL. */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_OFST 16
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_IN_FCNTL_LEN 4
+
+/* MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT msgresponse */
+#define	MC_CMD_FPGA_OP_SET_INTERNAL_MAC_OUT_LEN 0
+
 
 /***********************************/
 /* MC_CMD_EXTERNAL_MAE_GET_LINK_MODE
@@ -26483,6 +26838,12 @@
 #define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_OFST 29
 #define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_LBN 0
 #define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STRIP_OUTER_VLAN_WIDTH 1
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_OFST 29
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_LBN 1
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_RSS_ON_OUTER_WIDTH 1
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_OFST 29
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_LBN 2
+#define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_STEER_ON_OUTER_WIDTH 1
 /* Only if MATCH_DST_PORT is set. Port number as bytes in network order. */
 #define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_OFST 30
 #define	MC_CMD_VNIC_ENCAP_RULE_ADD_IN_DST_PORT_LEN 2
@@ -26544,6 +26905,257 @@
 #define	UUID_NODE_LBN 80
 #define	UUID_NODE_WIDTH 48
 
+
+/***********************************/
+/* MC_CMD_PLUGIN_ALLOC
+ * Create a handle to a datapath plugin's extension. This involves finding a
+ * currently-loaded plugin offering the given functionality (as identified by
+ * the UUID) and allocating a handle to track the usage of it. Plugin
+ * functionality is identified by 'extension' rather than any other identifier
+ * so that a single plugin bitfile may offer more than one piece of independent
+ * functionality. If two bitfiles are loaded which both offer the same
+ * extension, then the metadata is interrogated further to determine which is
+ * the newest and that is the one opened. See SF-123625-SW for architectural
+ * detail on datapath plugins.
+ */
+#define	MC_CMD_PLUGIN_ALLOC 0x1ad
+#define	MC_CMD_PLUGIN_ALLOC_MSGSET 0x1ad
+#undef	MC_CMD_0x1ad_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1ad_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_ALLOC_IN msgrequest */
+#define	MC_CMD_PLUGIN_ALLOC_IN_LEN 24
+/* The functionality requested of the plugin, as a UUID structure */
+#define	MC_CMD_PLUGIN_ALLOC_IN_UUID_OFST 0
+#define	MC_CMD_PLUGIN_ALLOC_IN_UUID_LEN 16
+/* Additional options for opening the handle */
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAGS_OFST 16
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAGS_LEN 4
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_OFST 16
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_LBN 0
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_INFO_ONLY_WIDTH 1
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_OFST 16
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_LBN 1
+#define	MC_CMD_PLUGIN_ALLOC_IN_FLAG_ALLOW_DISABLED_WIDTH 1
+/* Load the extension only if it is in the specified administrative group.
+ * Specify ANY to load the extension wherever it is found (if there are
+ * multiple choices then the extension with the highest MINOR_VER/PATCH_VER
+ * will be loaded). See MC_CMD_PLUGIN_GET_META_GLOBAL for a description of
+ * administrative groups.
+ */
+#define	MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_OFST 20
+#define	MC_CMD_PLUGIN_ALLOC_IN_ADMIN_GROUP_LEN 2
+/* enum: Load the extension from any ADMIN_GROUP. */
+#define	MC_CMD_PLUGIN_ALLOC_IN_ANY 0xffff
+/* Reserved */
+#define	MC_CMD_PLUGIN_ALLOC_IN_RESERVED_OFST 22
+#define	MC_CMD_PLUGIN_ALLOC_IN_RESERVED_LEN 2
+
+/* MC_CMD_PLUGIN_ALLOC_OUT msgresponse */
+#define	MC_CMD_PLUGIN_ALLOC_OUT_LEN 4
+/* Unique identifier of this usage */
+#define	MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_ALLOC_OUT_HANDLE_LEN 4
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_FREE
+ * Delete a handle to a plugin's extension.
+ */
+#define	MC_CMD_PLUGIN_FREE 0x1ae
+#define	MC_CMD_PLUGIN_FREE_MSGSET 0x1ae
+#undef	MC_CMD_0x1ae_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1ae_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_FREE_IN msgrequest */
+#define	MC_CMD_PLUGIN_FREE_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define	MC_CMD_PLUGIN_FREE_IN_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_FREE_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_FREE_OUT msgresponse */
+#define	MC_CMD_PLUGIN_FREE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_GLOBAL
+ * Returns the global metadata applying to the whole plugin extension. See the
+ * other metadata calls for subtypes of data.
+ */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL 0x1af
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_MSGSET 0x1af
+#undef	MC_CMD_0x1af_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1af_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_IN msgrequest */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_IN_LEN 4
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_IN_HANDLE_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_GLOBAL_OUT msgresponse */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_LEN 36
+/* Unique identifier of this plugin extension. This is identical to the value
+ * which was requested when the handle was allocated.
+ */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_UUID_LEN 16
+/* semver sub-version of this plugin extension */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_OFST 16
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MINOR_VER_LEN 2
+/* semver micro-version of this plugin extension */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_OFST 18
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PATCH_VER_LEN 2
+/* Number of different messages which can be sent to this extension */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_OFST 20
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_NUM_MSGS_LEN 4
+/* Byte offset within the VI window of the plugin's mapped CSR window. */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_OFST 24
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_OFFSET_LEN 2
+/* Number of bytes mapped through to the plugin's CSRs. 0 if that feature was
+ * not requested by the plugin (in which case MAPPED_CSR_OFFSET and
+ * MAPPED_CSR_FLAGS are ignored).
+ */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_OFST 26
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_SIZE_LEN 2
+/* Flags indicating how to perform the CSR window mapping. */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_OFST 28
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAGS_LEN 4
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_OFST 28
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_LBN 0
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_READ_WIDTH 1
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_OFST 28
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_LBN 1
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_MAPPED_CSR_FLAG_WRITE_WIDTH 1
+/* Identifier of the set of extensions which all change state together.
+ * Extensions having the same ADMIN_GROUP will always load and unload at the
+ * same time. ADMIN_GROUP values themselves are arbitrary (but they contain a
+ * generation number as an implementation detail to ensure that they're not
+ * reused rapidly).
+ */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_OFST 32
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_ADMIN_GROUP_LEN 1
+/* Bitshift in MC_CMD_DEVEL_CLIENT_PRIVILEGE_MODIFY's MASK parameters
+ * corresponding to this extension, i.e. set the bit 1<<PRIVILEGE_BIT to permit
+ * access to this extension.
+ */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_OFST 33
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_PRIVILEGE_BIT_LEN 1
+/* Reserved */
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_OFST 34
+#define	MC_CMD_PLUGIN_GET_META_GLOBAL_OUT_RESERVED_LEN 2
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER
+ * Returns metadata supplied by the plugin author which describes this
+ * extension in a human-readable way. Contrast with
+ * MC_CMD_PLUGIN_GET_META_GLOBAL, which returns information needed for software
+ * to operate.
+ */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER 0x1b0
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_MSGSET 0x1b0
+#undef	MC_CMD_0x1b0_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1b0_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_IN msgrequest */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_LEN 12
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_HANDLE_LEN 4
+/* Category of data to return */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_OFST 4
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_SUBTYPE_LEN 4
+/* enum: Top-level information about the extension. The returned data is an
+ * array of key/value pairs using the keys in RFC5013 (Dublin Core) to describe
+ * the extension. The data is a back-to-back list of zero-terminated strings;
+ * the even-numbered fields (0,2,4,...) are keys and their following odd-
+ * numbered fields are the corresponding values. Both keys and values are
+ * nominally UTF-8. Per RFC5013, the same key may be repeated any number of
+ * times. Note that all information (including the key/value structure itself
+ * and the UTF-8 encoding) may have been provided by the plugin author, so
+ * callers must be cautious about parsing it. Callers should parse only the
+ * top-level structure to separate out the keys and values; the contents of the
+ * values is not expected to be machine-readable.
+ */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_EXTENSION_KVS 0x0
+/* Byte position of the data to be returned within the full data block of the
+ * given SUBTYPE.
+ */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_OFST 8
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_IN_OFFSET_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT msgresponse */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMIN 4
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX 252
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LENMAX_MCDI2 1020
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_LEN(num) (4+1*(num))
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_NUM(len) (((len)-4)/1)
+/* Full length of the data block of the requested SUBTYPE, in bytes. */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_TOTAL_SIZE_LEN 4
+/* The information requested by SUBTYPE. */
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_OFST 4
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_LEN 1
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MINNUM 0
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM 248
+#define	MC_CMD_PLUGIN_GET_META_PUBLISHER_OUT_DATA_MAXNUM_MCDI2 1016
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_META_MSG
+ * Returns the simple metadata for a specific plugin request message. This
+ * supplies information necessary for the host to know how to build an
+ * MC_CMD_PLUGIN_REQ request.
+ */
+#define	MC_CMD_PLUGIN_GET_META_MSG 0x1b1
+#define	MC_CMD_PLUGIN_GET_META_MSG_MSGSET 0x1b1
+#undef	MC_CMD_0x1b1_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1b1_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_META_MSG_IN msgrequest */
+#define	MC_CMD_PLUGIN_GET_META_MSG_IN_LEN 8
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define	MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_MSG_IN_HANDLE_LEN 4
+/* Unique message ID to obtain */
+#define	MC_CMD_PLUGIN_GET_META_MSG_IN_ID_OFST 4
+#define	MC_CMD_PLUGIN_GET_META_MSG_IN_ID_LEN 4
+
+/* MC_CMD_PLUGIN_GET_META_MSG_OUT msgresponse */
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_LEN 44
+/* Unique message ID. This is the same value as the input parameter; it exists
+ * to allow future MCDI extensions which enumerate all messages.
+ */
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_OFST 0
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_ID_LEN 4
+/* Packed index number of this message, assigned by the MC to give each message
+ * a unique ID in an array to allow for more efficient storage/management.
+ */
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_OFST 4
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_INDEX_LEN 4
+/* Short human-readable codename for this message. This is conventionally
+ * formatted as a C identifier in the basic ASCII character set with any spare
+ * bytes at the end set to 0, however this convention is not enforced by the MC
+ * so consumers must check for all potential malformations before using it for
+ * a trusted purpose.
+ */
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_OFST 8
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_NAME_LEN 32
+/* Number of bytes of data which must be passed from the host kernel to the MC
+ * for this message's payload, and which are passed back again in the response.
+ * The MC's plugin metadata loader will have validated that the number of bytes
+ * specified here will fit in to MC_CMD_PLUGIN_REQ_IN_DATA in a single MCDI
+ * message.
+ */
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_OFST 40
+#define	MC_CMD_PLUGIN_GET_META_MSG_OUT_DATA_SIZE_LEN 4
+
 /* PLUGIN_EXTENSION structuredef: Used within MC_CMD_PLUGIN_GET_ALL to describe
  * an individual extension.
  */
@@ -26561,6 +27173,100 @@
 #define	PLUGIN_EXTENSION_RESERVED_LBN 137
 #define	PLUGIN_EXTENSION_RESERVED_WIDTH 23
 
+
+/***********************************/
+/* MC_CMD_PLUGIN_GET_ALL
+ * Returns a list of all plugin extensions currently loaded and available. The
+ * UUIDs returned can be passed to MC_CMD_PLUGIN_ALLOC in order to obtain more
+ * detailed metadata via the MC_CMD_PLUGIN_GET_META_* family of requests. The
+ * ADMIN_GROUP field collects how extensions are grouped in to units which are
+ * loaded/unloaded together; extensions with the same value are in the same
+ * group.
+ */
+#define	MC_CMD_PLUGIN_GET_ALL 0x1b2
+#define	MC_CMD_PLUGIN_GET_ALL_MSGSET 0x1b2
+#undef	MC_CMD_0x1b2_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1b2_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_GET_ALL_IN msgrequest */
+#define	MC_CMD_PLUGIN_GET_ALL_IN_LEN 4
+/* Additional options for querying. Note that if neither FLAG_INCLUDE_ENABLED
+ * nor FLAG_INCLUDE_DISABLED are specified then the result set will be empty.
+ */
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_OFST 0
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAGS_LEN 4
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_OFST 0
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_LBN 0
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_ENABLED_WIDTH 1
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_OFST 0
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_LBN 1
+#define	MC_CMD_PLUGIN_GET_ALL_IN_FLAG_INCLUDE_DISABLED_WIDTH 1
+
+/* MC_CMD_PLUGIN_GET_ALL_OUT msgresponse */
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_LENMIN 0
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX 240
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_LENMAX_MCDI2 1020
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_LEN(num) (0+20*(num))
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_NUM(len) (((len)-0)/20)
+/* The list of available plugin extensions, as an array of PLUGIN_EXTENSION
+ * structs.
+ */
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_OFST 0
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_LEN 20
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MINNUM 0
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM 12
+#define	MC_CMD_PLUGIN_GET_ALL_OUT_EXTENSIONS_MAXNUM_MCDI2 51
+
+
+/***********************************/
+/* MC_CMD_PLUGIN_REQ
+ * Send a command to a plugin. A plugin may define an arbitrary number of
+ * 'messages' which it allows applications on the host system to send, each
+ * identified by a 32-bit ID.
+ */
+#define	MC_CMD_PLUGIN_REQ 0x1b3
+#define	MC_CMD_PLUGIN_REQ_MSGSET 0x1b3
+#undef	MC_CMD_0x1b3_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1b3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_PLUGIN_REQ_IN msgrequest */
+#define	MC_CMD_PLUGIN_REQ_IN_LENMIN 8
+#define	MC_CMD_PLUGIN_REQ_IN_LENMAX 252
+#define	MC_CMD_PLUGIN_REQ_IN_LENMAX_MCDI2 1020
+#define	MC_CMD_PLUGIN_REQ_IN_LEN(num) (8+1*(num))
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_NUM(len) (((len)-8)/1)
+/* Handle returned by MC_CMD_PLUGIN_ALLOC_OUT */
+#define	MC_CMD_PLUGIN_REQ_IN_HANDLE_OFST 0
+#define	MC_CMD_PLUGIN_REQ_IN_HANDLE_LEN 4
+/* Message ID defined by the plugin author */
+#define	MC_CMD_PLUGIN_REQ_IN_ID_OFST 4
+#define	MC_CMD_PLUGIN_REQ_IN_ID_LEN 4
+/* Data blob being the parameter to the message. This must be of the length
+ * specified by MC_CMD_PLUGIN_GET_META_MSG_IN_MCDI_PARAM_SIZE.
+ */
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_OFST 8
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_LEN 1
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_MINNUM 0
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM 244
+#define	MC_CMD_PLUGIN_REQ_IN_DATA_MAXNUM_MCDI2 1012
+
+/* MC_CMD_PLUGIN_REQ_OUT msgresponse */
+#define	MC_CMD_PLUGIN_REQ_OUT_LENMIN 0
+#define	MC_CMD_PLUGIN_REQ_OUT_LENMAX 252
+#define	MC_CMD_PLUGIN_REQ_OUT_LENMAX_MCDI2 1020
+#define	MC_CMD_PLUGIN_REQ_OUT_LEN(num) (0+1*(num))
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_NUM(len) (((len)-0)/1)
+/* The input data, as transformed and/or updated by the plugin's eBPF. Will be
+ * the same size as the input DATA parameter.
+ */
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_OFST 0
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_LEN 1
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_MINNUM 0
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM 252
+#define	MC_CMD_PLUGIN_REQ_OUT_DATA_MAXNUM_MCDI2 1020
+
 /* DESC_ADDR_REGION structuredef: Describes a contiguous region of DESC_ADDR
  * space that maps to a contiguous region of TRGT_ADDR space. Addresses
  * DESC_ADDR in the range [DESC_ADDR_BASE:DESC_ADDR_BASE + 1 <<
@@ -27219,6 +27925,38 @@
 #define	MC_CMD_VIRTIO_TEST_FEATURES_OUT_LEN 0
 
 
+/***********************************/
+/* MC_CMD_VIRTIO_GET_CAPABILITIES
+ * Get virtio capabilities supported by the device. Returns general virtio
+ * capabilities and limitations of the hardware / firmware implementation
+ * (hardware device as a whole), rather than that of individual configured
+ * virtio devices. At present, only the absolute maximum number of queues
+ * allowed on multi-queue devices is returned. Response is expected to be
+ * extended as necessary in the future.
+ */
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES 0x1d3
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_MSGSET 0x1d3
+#undef	MC_CMD_0x1d3_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1d3_PRIVILEGE_CTG SRIOV_CTG_GENERAL
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_IN msgrequest */
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_IN_LEN 4
+/* Type of device to get capabilities for. Matches the device id as defined by
+ * the virtio spec.
+ */
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_OFST 0
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_IN_DEVICE_ID_LEN 4
+/*            Enum values, see field(s): */
+/*               MC_CMD_VIRTIO_GET_FEATURES/MC_CMD_VIRTIO_GET_FEATURES_IN/DEVICE_ID */
+
+/* MC_CMD_VIRTIO_GET_CAPABILITIES_OUT msgresponse */
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_LEN 4
+/* Maximum number of queues supported for a single device instance */
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_OFST 0
+#define	MC_CMD_VIRTIO_GET_CAPABILITIES_OUT_MAX_QUEUES_LEN 4
+
+
 /***********************************/
 /* MC_CMD_VIRTIO_INIT_QUEUE
  * Create a virtio virtqueue. Fails with EALREADY if the queue already exists.
@@ -27490,6 +28228,24 @@
 #define	PCIE_FUNCTION_INTF_LBN 32
 #define	PCIE_FUNCTION_INTF_WIDTH 32
 
+/* QUEUE_ID structuredef: Structure representing an absolute queue identifier
+ * (absolute VI number + VI relative queue number). On Keystone, a VI can
+ * contain multiple queues (at present, up to 2), each with separate controls
+ * for direction. This structure is required to uniquely identify the absolute
+ * source queue for descriptor proxy functions.
+ */
+#define	QUEUE_ID_LEN 4
+/* Absolute VI number */
+#define	QUEUE_ID_ABS_VI_OFST 0
+#define	QUEUE_ID_ABS_VI_LEN 2
+#define	QUEUE_ID_ABS_VI_LBN 0
+#define	QUEUE_ID_ABS_VI_WIDTH 16
+/* Relative queue number within the VI */
+#define	QUEUE_ID_REL_QUEUE_LBN 16
+#define	QUEUE_ID_REL_QUEUE_WIDTH 1
+#define	QUEUE_ID_RESERVED_LBN 17
+#define	QUEUE_ID_RESERVED_WIDTH 15
+
 
 /***********************************/
 /* MC_CMD_DESC_PROXY_FUNC_CREATE
@@ -28088,7 +28844,11 @@
  * Enable descriptor proxying for function into target event queue. Returns VI
  * allocation info for the proxy source function, so that the caller can map
  * absolute VI IDs from descriptor proxy events back to the originating
- * function.
+ * function. This is a legacy function that only supports single queue proxy
+ * devices. It is also limited in that it can only be called after host driver
+ * attach (once VI allocation is known) and will return MC_CMD_ERR_ENOTCONN
+ * otherwise. For new code, see MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE which
+ * supports multi-queue devices and has no dependency on host driver attach.
  */
 #define	MC_CMD_DESC_PROXY_FUNC_ENABLE 0x178
 #define	MC_CMD_DESC_PROXY_FUNC_ENABLE_MSGSET 0x178
@@ -28119,9 +28879,46 @@
 #define	MC_CMD_DESC_PROXY_FUNC_ENABLE_OUT_VI_BASE_LEN 4
 
 
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE
+ * Enable descriptor proxying for a source queue on a host function into target
+ * event queue. Source queue number is a relative virtqueue number on the
+ * source function (0 to max_virtqueues-1). For a multi-queue device, the
+ * caller must enable all source queues individually. To retrieve absolute VI
+ * information for the source function (so that VI IDs from descriptor proxy
+ * events can be mapped back to source function / queue) see
+ * MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE 0x1d0
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_MSGSET 0x1d0
+#undef	MC_CMD_0x1d0_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1d0_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN msgrequest */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_LEN 12
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_OFST 0
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to enable proxying on */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+/* Descriptor proxy sink queue (caller function relative). Must be extended
+ * width event queue
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_OFST 8
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_IN_TARGET_EVQ_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT msgresponse */
+#define	MC_CMD_DESC_PROXY_FUNC_ENABLE_QUEUE_OUT_LEN 0
+
+
 /***********************************/
 /* MC_CMD_DESC_PROXY_FUNC_DISABLE
- * Disable descriptor proxying for function
+ * Disable descriptor proxying for function. For multi-queue functions,
+ * disables all queues.
  */
 #define	MC_CMD_DESC_PROXY_FUNC_DISABLE 0x179
 #define	MC_CMD_DESC_PROXY_FUNC_DISABLE_MSGSET 0x179
@@ -28141,6 +28938,77 @@
 #define	MC_CMD_DESC_PROXY_FUNC_DISABLE_OUT_LEN 0
 
 
+/***********************************/
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE
+ * Disable descriptor proxying for a specific source queue on a function.
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE 0x1d1
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_MSGSET 0x1d1
+#undef	MC_CMD_0x1d1_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1d1_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN msgrequest */
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_LEN 8
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_OFST 0
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_HANDLE_LEN 4
+/* Source relative queue number to disable proxying on */
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_OFST 4
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_IN_SOURCE_QUEUE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT msgresponse */
+#define	MC_CMD_DESC_PROXY_FUNC_DISABLE_QUEUE_OUT_LEN 0
+
+
+/***********************************/
+/* MC_CMD_DESC_PROXY_GET_VI_INFO
+ * Returns absolute VI allocation information for the descriptor proxy source
+ * function referenced by HANDLE, so that the caller can map absolute VI IDs
+ * from descriptor proxy events back to the originating function and queue. The
+ * call is only valid after the host driver for the source function has
+ * attached (after receiving a driver attach event for the descriptor proxy
+ * function) and will fail with ENOTCONN otherwise.
+ */
+#define	MC_CMD_DESC_PROXY_GET_VI_INFO 0x1d2
+#define	MC_CMD_DESC_PROXY_GET_VI_INFO_MSGSET 0x1d2
+#undef	MC_CMD_0x1d2_PRIVILEGE_CTG
+
+#define	MC_CMD_0x1d2_PRIVILEGE_CTG SRIOV_CTG_ADMIN
+
+/* MC_CMD_DESC_PROXY_GET_VI_INFO_IN msgrequest */
+#define	MC_CMD_DESC_PROXY_GET_VI_INFO_IN_LEN 4
+/* Handle to descriptor proxy function (as returned by
+ * MC_CMD_DESC_PROXY_FUNC_OPEN)
+ */
+#define	MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_OFST 0
+#define	MC_CMD_DESC_PROXY_GET_VI_INFO_IN_HANDLE_LEN 4
+
+/* MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT msgresponse */
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMIN 0
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX 252
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LENMAX_MCDI2 1020
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_LEN(num) (0+4*(num))
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_NUM(len) (((len)-0)/4)
+/* VI information (VI ID + VI relative queue number) for each of the source
+ * queues (in order from 0 to max_virtqueues-1), as array of QUEUE_ID
+ * structures.
+ */
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_OFST 0
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_LEN 4
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MINNUM 0
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM 63
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_MAXNUM_MCDI2 255
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_OFST 0
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_ABS_VI_LEN 2
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_LBN 16
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_REL_QUEUE_WIDTH 1
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_LBN 17
+#define	MC_CMD_DESC_PROXY_FUNC_GET_VI_INFO_OUT_VI_MAP_RESERVED_WIDTH 15
+
+
 /***********************************/
 /* MC_CMD_GET_ADDR_SPC_ID
  * Get Address space identifier for use in mem2mem descriptors for a given
@@ -29384,9 +30252,12 @@
 #define	MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_OFST 4
 #define	MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_LBN 3
 #define	MC_CMD_MAE_GET_CAPS_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
-/* The total number of counters available to allocate. */
+/* Deprecated alias for AR_COUNTERS. */
 #define	MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_OFST 8
 #define	MC_CMD_MAE_GET_CAPS_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_OFST 8
+#define	MC_CMD_MAE_GET_CAPS_OUT_AR_COUNTERS_LEN 4
 /* The total number of counters lists available to allocate. A value of zero
  * indicates that counter lists are not supported by the NIC. (But single
  * counters may still be.)
@@ -29429,6 +30300,87 @@
 #define	MC_CMD_MAE_GET_CAPS_OUT_API_VER_OFST 48
 #define	MC_CMD_MAE_GET_CAPS_OUT_API_VER_LEN 4
 
+/* MC_CMD_MAE_GET_CAPS_V2_OUT msgresponse */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_LEN 60
+/* The number of field IDs that the NIC supports. Any field with a ID greater
+ * than or equal to the value returned in this field must be treated as having
+ * a support level of MAE_FIELD_UNSUPPORTED in all requests.
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_OFST 0
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_MATCH_FIELD_COUNT_LEN 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_OFST 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPES_SUPPORTED_LEN 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_OFST 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_LBN 0
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_VXLAN_WIDTH 1
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_OFST 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_LBN 1
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_NVGRE_WIDTH 1
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_OFST 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_LBN 2
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_GENEVE_WIDTH 1
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_OFST 4
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_LBN 3
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_TYPE_L2GRE_WIDTH 1
+/* Deprecated alias for AR_COUNTERS. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_OFST 8
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTERS_LEN 4
+/* The total number of AR counters available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_OFST 8
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_AR_COUNTERS_LEN 4
+/* The total number of counters lists available to allocate. A value of zero
+ * indicates that counter lists are not supported by the NIC. (But single
+ * counters may still be.)
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_OFST 12
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_LISTS_LEN 4
+/* The total number of encap header structures available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_OFST 16
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ENCAP_HEADER_LIMIT_LEN 4
+/* Reserved. Should be zero. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_OFST 20
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_RSVD_LEN 4
+/* The total number of action sets available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_OFST 24
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SETS_LEN 4
+/* The total number of action set lists available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_OFST 28
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_SET_LISTS_LEN 4
+/* The total number of outer rules available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_OFST 32
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_RULES_LEN 4
+/* The total number of action rules available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_OFST 36
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_RULES_LEN 4
+/* The number of priorities available for ACTION_RULE filters. It is invalid to
+ * install a MATCH_ACTION filter with a priority number >= ACTION_PRIOS.
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_OFST 40
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_ACTION_PRIOS_LEN 4
+/* The number of priorities available for OUTER_RULE filters. It is invalid to
+ * install an OUTER_RULE filter with a priority number >= OUTER_PRIOS.
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_OFST 44
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_OUTER_PRIOS_LEN 4
+/* MAE API major version. Currently 1. If this field is not present in the
+ * response (i.e. response shorter than 384 bits), then its value is zero. If
+ * the value does not match the client's expectations, the client should raise
+ * a fatal error.
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_OFST 48
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_API_VER_LEN 4
+/* Mask of supported counter types. Each bit position corresponds to a value of
+ * the MAE_COUNTER_TYPE enum. If this field is missing (i.e. V1 response),
+ * clients must assume that only AR counters are supported (i.e.
+ * COUNTER_TYPES_SUPPORTED==0x1). See also
+ * MC_CMD_MAE_COUNTERS_STREAM_START/COUNTER_TYPES_MASK.
+ */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_OFST 52
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_COUNTER_TYPES_SUPPORTED_LEN 4
+/* The total number of conntrack counters available to allocate. */
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_OFST 56
+#define	MC_CMD_MAE_GET_CAPS_V2_OUT_CT_COUNTERS_LEN 4
+
 
 /***********************************/
 /* MC_CMD_MAE_GET_AR_CAPS
@@ -29495,8 +30447,8 @@
 
 /***********************************/
 /* MC_CMD_MAE_COUNTER_ALLOC
- * Allocate match-action-engine counters, which can be referenced in Action
- * Rules.
+ * Allocate match-action-engine counters, which can be referenced in various
+ * tables.
  */
 #define	MC_CMD_MAE_COUNTER_ALLOC 0x143
 #define	MC_CMD_MAE_COUNTER_ALLOC_MSGSET 0x143
@@ -29504,12 +30456,25 @@
 
 #define	MC_CMD_0x143_PRIVILEGE_CTG SRIOV_CTG_MAE
 
-/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_ALLOC_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
 #define	MC_CMD_MAE_COUNTER_ALLOC_IN_LEN 4
 /* The number of counters that the driver would like allocated */
 #define	MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_OFST 0
 #define	MC_CMD_MAE_COUNTER_ALLOC_IN_REQUESTED_COUNT_LEN 4
 
+/* MC_CMD_MAE_COUNTER_ALLOC_V2_IN msgrequest */
+#define	MC_CMD_MAE_COUNTER_ALLOC_V2_IN_LEN 8
+/* The number of counters that the driver would like allocated */
+#define	MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_OFST 0
+#define	MC_CMD_MAE_COUNTER_ALLOC_V2_IN_REQUESTED_COUNT_LEN 4
+/* Which type of counter to allocate. */
+#define	MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_OFST 4
+#define	MC_CMD_MAE_COUNTER_ALLOC_V2_IN_COUNTER_TYPE_LEN 4
+/*            Enum values, see field(s): */
+/*               MAE_COUNTER_TYPE */
+
 /* MC_CMD_MAE_COUNTER_ALLOC_OUT msgresponse */
 #define	MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMIN 12
 #define	MC_CMD_MAE_COUNTER_ALLOC_OUT_LENMAX 252
@@ -29518,7 +30483,8 @@
 #define	MC_CMD_MAE_COUNTER_ALLOC_OUT_COUNTER_ID_NUM(len) (((len)-8)/4)
 /* Generation count. Packets with generation count >= GENERATION_COUNT will
  * contain valid counter values for counter IDs allocated in this call, unless
- * the counter values are zero and zero squash is enabled.
+ * the counter values are zero and zero squash is enabled. Note that there is
+ * an independent GENERATION_COUNT object per counter type.
  */
 #define	MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_OFST 0
 #define	MC_CMD_MAE_COUNTER_ALLOC_OUT_GENERATION_COUNT_LEN 4
@@ -29548,7 +30514,9 @@
 
 #define	MC_CMD_0x144_PRIVILEGE_CTG SRIOV_CTG_MAE
 
-/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest */
+/* MC_CMD_MAE_COUNTER_FREE_IN msgrequest: Using this is equivalent to using V2
+ * with COUNTER_TYPE=AR.
+ */
 #define	MC_CMD_MAE_COUNTER_FREE_IN_LENMIN 8
 #define	MC_CMD_MAE_COUNTER_FREE_IN_LENMAX 132
 #define	MC_CMD_MAE_COUNTER_FREE_IN_LENMAX_MCDI2 132
@@ -29564,6 +30532,23 @@
 #define	MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM 32
 #define	MC_CMD_MAE_COUNTER_FREE_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
 
+/* MC_CMD_MAE_COUNTER_FREE_V2_IN msgrequest */
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_LEN 136
+/* The number of counter IDs to be freed. */
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_OFST 0
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_ID_COUNT_LEN 4
+/* An array containing the counter IDs to be freed. */
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_OFST 4
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_LEN 4
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MINNUM 1
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM 32
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_FREE_COUNTER_ID_MAXNUM_MCDI2 32
+/* Which type of counter to free. */
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_OFST 132
+#define	MC_CMD_MAE_COUNTER_FREE_V2_IN_COUNTER_TYPE_LEN 4
+/*            Enum values, see field(s): */
+/*               MAE_COUNTER_TYPE */
+
 /* MC_CMD_MAE_COUNTER_FREE_OUT msgresponse */
 #define	MC_CMD_MAE_COUNTER_FREE_OUT_LENMIN 12
 #define	MC_CMD_MAE_COUNTER_FREE_OUT_LENMAX 136
@@ -29572,11 +30557,13 @@
 #define	MC_CMD_MAE_COUNTER_FREE_OUT_FREED_COUNTER_ID_NUM(len) (((len)-8)/4)
 /* Generation count. A packet with generation count == GENERATION_COUNT will
  * contain the final values for these counter IDs, unless the counter values
- * are zero and zero squash is enabled. Receiving a packet with generation
- * count > GENERATION_COUNT guarantees that no more values will be written for
- * these counters. If values for these counter IDs are present, the counter ID
- * has been reallocated. A counter ID will not be reallocated within a single
- * read cycle as this would merge increments from the 'old' and 'new' counters.
+ * are zero and zero squash is enabled. Note that the GENERATION_COUNT value is
+ * specific to the COUNTER_TYPE (IDENTIFIER field in packet header). Receiving
+ * a packet with generation count > GENERATION_COUNT guarantees that no more
+ * values will be written for these counters. If values for these counter IDs
+ * are present, the counter ID has been reallocated. A counter ID will not be
+ * reallocated within a single read cycle as this would merge increments from
+ * the 'old' and 'new' counters.
  */
 #define	MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_OFST 0
 #define	MC_CMD_MAE_COUNTER_FREE_OUT_GENERATION_COUNT_LEN 4
@@ -29616,7 +30603,9 @@
 
 #define	MC_CMD_0x151_PRIVILEGE_CTG SRIOV_CTG_MAE
 
-/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest */
+/* MC_CMD_MAE_COUNTERS_STREAM_START_IN msgrequest: Using V1 is equivalent to V2
+ * with COUNTER_TYPES_MASK=0x1 (i.e. AR counters only).
+ */
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_IN_LEN 8
 /* The RxQ to write packets to. */
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_IN_QID_OFST 0
@@ -29634,6 +30623,35 @@
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_LBN 1
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_IN_COUNTER_STALL_EN_WIDTH 1
 
+/* MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN msgrequest */
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_LEN 12
+/* The RxQ to write packets to. */
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_OFST 0
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_QID_LEN 2
+/* Maximum size in bytes of packets that may be written to the RxQ. */
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_OFST 2
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_PACKET_SIZE_LEN 2
+/* Optional flags. */
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_OFST 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_FLAGS_LEN 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_OFST 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_LBN 0
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_ZERO_SQUASH_DISABLE_WIDTH 1
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_OFST 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_LBN 1
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_STALL_EN_WIDTH 1
+/* Mask of which counter types should be reported. Each bit position
+ * corresponds to a value of the MAE_COUNTER_TYPE enum. For example a value of
+ * 0x3 requests both AR and CT counters. A value of zero is invalid. Counter
+ * types not selected by the mask value won't be included in the stream. If a
+ * client wishes to change which counter types are reported, it must first call
+ * MAE_COUNTERS_STREAM_STOP, then restart it with the new mask value.
+ * Requesting a counter type which isn't supported by firmware (reported in
+ * MC_CMD_MAE_GET_CAPS/COUNTER_TYPES_SUPPORTED) will result in ENOTSUP.
+ */
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_OFST 8
+#define	MC_CMD_MAE_COUNTERS_STREAM_START_V2_IN_COUNTER_TYPES_MASK_LEN 4
+
 /* MC_CMD_MAE_COUNTERS_STREAM_START_OUT msgresponse */
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_OUT_LEN 4
 #define	MC_CMD_MAE_COUNTERS_STREAM_START_OUT_FLAGS_OFST 0
@@ -29661,14 +30679,32 @@
 
 /* MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT msgresponse */
 #define	MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_LEN 4
-/* Generation count. The final set of counter values will be written out in
- * packets with count == GENERATION_COUNT. An empty packet with count >
- * GENERATION_COUNT indicates that no more counter values will be written to
- * this stream.
+/* Generation count for AR counters. The final set of AR counter values will be
+ * written out in packets with count == GENERATION_COUNT. An empty packet with
+ * count > GENERATION_COUNT indicates that no more counter values of this type
+ * will be written to this stream.
  */
 #define	MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_OFST 0
 #define	MC_CMD_MAE_COUNTERS_STREAM_STOP_OUT_GENERATION_COUNT_LEN 4
 
+/* MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT msgresponse */
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMIN 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX 32
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LENMAX_MCDI2 32
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_LEN(num) (0+4*(num))
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_NUM(len) (((len)-0)/4)
+/* Array of generation counts, indexed by MAE_COUNTER_TYPE. Note that since
+ * MAE_COUNTER_TYPE_AR==0, this response is backwards-compatible with V1. The
+ * final set of counter values will be written out in packets with count ==
+ * GENERATION_COUNT. An empty packet with count > GENERATION_COUNT indicates
+ * that no more counter values of this type will be written to this stream.
+ */
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_OFST 0
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_LEN 4
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MINNUM 1
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM 8
+#define	MC_CMD_MAE_COUNTERS_STREAM_STOP_V2_OUT_GENERATION_COUNT_MAXNUM_MCDI2 8
+
 
 /***********************************/
 /* MC_CMD_MAE_COUNTERS_STREAM_GIVE_CREDITS
@@ -29941,9 +30977,10 @@
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_LIST_ID_LEN 4
 /* If a driver only wished to update one counter within this action set, then
  * it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
  */
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_OFST 28
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_IN_COUNTER_ID_LEN 4
@@ -30021,9 +31058,10 @@
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_LIST_ID_LEN 4
 /* If a driver only wished to update one counter within this action set, then
  * it can supply a COUNTER_ID instead of allocating a single-element counter
- * list. This field should be set to COUNTER_ID_NULL if this behaviour is not
- * required. It is not valid to supply a non-NULL value for both
- * COUNTER_LIST_ID and COUNTER_ID.
+ * list. The ID must have been allocated with COUNTER_TYPE=AR. This field
+ * should be set to COUNTER_ID_NULL if this behaviour is not required. It is
+ * not valid to supply a non-NULL value for both COUNTER_LIST_ID and
+ * COUNTER_ID.
  */
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_OFST 28
 #define	MC_CMD_MAE_ACTION_SET_ALLOC_V2_IN_COUNTER_ID_LEN 4
@@ -30352,7 +31390,8 @@
 #define	MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_LBN 64
 #define	MAE_ACTION_RULE_RESPONSE_LOOKUP_CONTROL_WIDTH 32
 /* Counter ID to increment if DO_CT or DO_RECIRC is set. Must be set to
- * COUNTER_ID_NULL otherwise.
+ * COUNTER_ID_NULL otherwise. Counter ID must have been allocated with
+ * COUNTER_TYPE=AR.
  */
 #define	MAE_ACTION_RULE_RESPONSE_COUNTER_ID_OFST 12
 #define	MAE_ACTION_RULE_RESPONSE_COUNTER_ID_LEN 4
@@ -30710,6 +31749,108 @@
 #define	MAE_MPORT_DESC_VNIC_PLUGIN_TBD_LBN 352
 #define	MAE_MPORT_DESC_VNIC_PLUGIN_TBD_WIDTH 32
 
+/* MAE_MPORT_DESC_V2 structuredef */
+#define	MAE_MPORT_DESC_V2_LEN 56
+#define	MAE_MPORT_DESC_V2_MPORT_ID_OFST 0
+#define	MAE_MPORT_DESC_V2_MPORT_ID_LEN 4
+#define	MAE_MPORT_DESC_V2_MPORT_ID_LBN 0
+#define	MAE_MPORT_DESC_V2_MPORT_ID_WIDTH 32
+/* Reserved for future purposes, contains information independent of caller */
+#define	MAE_MPORT_DESC_V2_FLAGS_OFST 4
+#define	MAE_MPORT_DESC_V2_FLAGS_LEN 4
+#define	MAE_MPORT_DESC_V2_FLAGS_LBN 32
+#define	MAE_MPORT_DESC_V2_FLAGS_WIDTH 32
+#define	MAE_MPORT_DESC_V2_CALLER_FLAGS_OFST 8
+#define	MAE_MPORT_DESC_V2_CALLER_FLAGS_LEN 4
+#define	MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_OFST 8
+#define	MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_LBN 0
+#define	MAE_MPORT_DESC_V2_CAN_RECEIVE_ON_WIDTH 1
+#define	MAE_MPORT_DESC_V2_CAN_DELIVER_TO_OFST 8
+#define	MAE_MPORT_DESC_V2_CAN_DELIVER_TO_LBN 1
+#define	MAE_MPORT_DESC_V2_CAN_DELIVER_TO_WIDTH 1
+#define	MAE_MPORT_DESC_V2_CAN_DELETE_OFST 8
+#define	MAE_MPORT_DESC_V2_CAN_DELETE_LBN 2
+#define	MAE_MPORT_DESC_V2_CAN_DELETE_WIDTH 1
+#define	MAE_MPORT_DESC_V2_IS_ZOMBIE_OFST 8
+#define	MAE_MPORT_DESC_V2_IS_ZOMBIE_LBN 3
+#define	MAE_MPORT_DESC_V2_IS_ZOMBIE_WIDTH 1
+#define	MAE_MPORT_DESC_V2_CALLER_FLAGS_LBN 64
+#define	MAE_MPORT_DESC_V2_CALLER_FLAGS_WIDTH 32
+/* Not the ideal name; it's really the type of thing connected to the m-port */
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_OFST 12
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_LEN 4
+/* enum: Connected to a MAC... */
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_NET_PORT 0x0
+/* enum: Adds metadata and delivers to another m-port */
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_ALIAS 0x1
+/* enum: Connected to a VNIC. */
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_VNIC 0x2
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_LBN 96
+#define	MAE_MPORT_DESC_V2_MPORT_TYPE_WIDTH 32
+/* 128-bit value available to drivers for m-port identification. */
+#define	MAE_MPORT_DESC_V2_UUID_OFST 16
+#define	MAE_MPORT_DESC_V2_UUID_LEN 16
+#define	MAE_MPORT_DESC_V2_UUID_LBN 128
+#define	MAE_MPORT_DESC_V2_UUID_WIDTH 128
+/* Big wadge of space reserved for other common properties */
+#define	MAE_MPORT_DESC_V2_RESERVED_OFST 32
+#define	MAE_MPORT_DESC_V2_RESERVED_LEN 8
+#define	MAE_MPORT_DESC_V2_RESERVED_LO_OFST 32
+#define	MAE_MPORT_DESC_V2_RESERVED_LO_LEN 4
+#define	MAE_MPORT_DESC_V2_RESERVED_LO_LBN 256
+#define	MAE_MPORT_DESC_V2_RESERVED_LO_WIDTH 32
+#define	MAE_MPORT_DESC_V2_RESERVED_HI_OFST 36
+#define	MAE_MPORT_DESC_V2_RESERVED_HI_LEN 4
+#define	MAE_MPORT_DESC_V2_RESERVED_HI_LBN 288
+#define	MAE_MPORT_DESC_V2_RESERVED_HI_WIDTH 32
+#define	MAE_MPORT_DESC_V2_RESERVED_LBN 256
+#define	MAE_MPORT_DESC_V2_RESERVED_WIDTH 64
+/* Logical port index. Only valid when type NET Port. */
+#define	MAE_MPORT_DESC_V2_NET_PORT_IDX_OFST 40
+#define	MAE_MPORT_DESC_V2_NET_PORT_IDX_LEN 4
+#define	MAE_MPORT_DESC_V2_NET_PORT_IDX_LBN 320
+#define	MAE_MPORT_DESC_V2_NET_PORT_IDX_WIDTH 32
+/* The m-port delivered to */
+#define	MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_OFST 40
+#define	MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LEN 4
+#define	MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_LBN 320
+#define	MAE_MPORT_DESC_V2_ALIAS_DELIVER_MPORT_ID_WIDTH 32
+/* The type of thing that owns the VNIC */
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_OFST 40
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LEN 4
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_FUNCTION 0x1 /* enum */
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_PLUGIN 0x2 /* enum */
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_LBN 320
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_TYPE_WIDTH 32
+/* The PCIe interface on which the function lives. CJK: We need an enumeration
+ * of interfaces that we extend as new interface (types) appear. This belongs
+ * elsewhere and should be referenced from here
+ */
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_OFST 44
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LEN 4
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_LBN 352
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_INTERFACE_WIDTH 32
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_OFST 48
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LEN 2
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_LBN 384
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_PF_IDX_WIDTH 16
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_OFST 50
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LEN 2
+/* enum: Indicates that the function is a PF */
+#define	MAE_MPORT_DESC_V2_VF_IDX_NULL 0xffff
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_LBN 400
+#define	MAE_MPORT_DESC_V2_VNIC_FUNCTION_VF_IDX_WIDTH 16
+/* Reserved. Should be ignored for now. */
+#define	MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_OFST 44
+#define	MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LEN 4
+#define	MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_LBN 352
+#define	MAE_MPORT_DESC_V2_VNIC_PLUGIN_TBD_WIDTH 32
+/* A client handle for the VNIC's owner. Only valid for type VNIC. */
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_OFST 52
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LEN 4
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_LBN 416
+#define	MAE_MPORT_DESC_V2_VNIC_CLIENT_HANDLE_WIDTH 32
+
 
 /***********************************/
 /* MC_CMD_MAE_MPORT_ENUMERATE
-- 
2.30.2


^ permalink raw reply	[relevance 2%]

* Re: [dpdk-dev] [PATCH v2] ethdev: fix representor port ID search by name
  2021-08-18 14:00  3% ` [dpdk-dev] [PATCH v2] " Andrew Rybchenko
@ 2021-08-27  9:18  0%   ` Xueming(Steven) Li
  0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-08-27  9:18 UTC (permalink / raw)
  To: Andrew Rybchenko, Ajit Khaparde, Somnath Kotur, John Daley,
	Hyong Youb Kim, Beilei Xing, Qiming Yang, Qi Zhang, Haiyue Wang,
	Matan Azrad, Shahaf Shuler, Slava Ovsiienko,
	NBU-Contact-Thomas Monjalon, Ferruh Yigit
  Cc: dev, Viacheslav Galaktionov



> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Wednesday, August 18, 2021 10:00 PM
> To: Ajit Khaparde <ajit.khaparde@broadcom.com>; Somnath Kotur <somnath.kotur@broadcom.com>; John Daley
> <johndale@cisco.com>; Hyong Youb Kim <hyonkim@cisco.com>; Beilei Xing <beilei.xing@intel.com>; Qiming Yang
> <qiming.yang@intel.com>; Qi Zhang <qi.z.zhang@intel.com>; Haiyue Wang <haiyue.wang@intel.com>; Matan Azrad
> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; NBU-Contact-Thomas
> Monjalon <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>
> Cc: dev@dpdk.org; Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>; Xueming(Steven) Li <xuemingl@nvidia.com>
> Subject: [PATCH v2] ethdev: fix representor port ID search by name
> 
> From: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> 
> Getting a list of representors from a representor does not make sense.
> Instead, a parent device should be used.
> 
> To this end, extend the rte_eth_dev_data structure to include the port ID of the parent device for representors.
> 
> Signed-off-by: Viacheslav Galaktionov <viacheslav.galaktionov@oktetlabs.ru>
> Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> ---
> The new field is added into the hole in rte_eth_dev_data structure.
> The patch does not change ABI, but extra care is required since ABI check is disabled for the structure because of the libabigail bug [1].
> 
> Potentially it is bad for out-of-tree drivers which implement representors but do not fill in a new parert_port_id field in
> rte_eth_dev_data structure. Do we care?
> 
> May be the patch should add lines to release notes, but I'd like to get initial feedback first.
> 
> mlx5 changes should be reviwed by maintainers very carefully, since we are not sure if we patch it correctly.
> 
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=28060
> 
>  drivers/net/bnxt/bnxt_reps.c             |  1 +
>  drivers/net/enic/enic_vf_representor.c   |  1 +
>  drivers/net/i40e/i40e_vf_representor.c   |  1 +
>  drivers/net/ice/ice_dcf_vf_representor.c |  1 +  drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
>  drivers/net/mlx5/linux/mlx5_os.c         | 17 +++++++++++++++++
>  drivers/net/mlx5/windows/mlx5_os.c       | 17 +++++++++++++++++
>  lib/ethdev/ethdev_driver.h               |  6 +++---
>  lib/ethdev/rte_class_eth.c               | 22 ++++++++++++++++++++--
>  lib/ethdev/rte_ethdev.c                  |  8 ++++----
>  lib/ethdev/rte_ethdev_core.h             |  4 ++++
>  11 files changed, 70 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c index bdbad53b7d..902591cd39 100644
> --- a/drivers/net/bnxt/bnxt_reps.c
> +++ b/drivers/net/bnxt/bnxt_reps.c
> @@ -187,6 +187,7 @@ int bnxt_representor_init(struct rte_eth_dev *eth_dev, void *params)
>  	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	eth_dev->data->representor_id = rep_params->vf_id;
> +	eth_dev->data->parent_port_id = rep_params->parent_dev->data->port_id;
> 
>  	rte_eth_random_addr(vf_rep_bp->dflt_mac_addr);
>  	memcpy(vf_rep_bp->mac_addr, vf_rep_bp->dflt_mac_addr, diff --git a/drivers/net/enic/enic_vf_representor.c
> b/drivers/net/enic/enic_vf_representor.c
> index 79dd6e5640..6ee7967ce9 100644
> --- a/drivers/net/enic/enic_vf_representor.c
> +++ b/drivers/net/enic/enic_vf_representor.c
> @@ -662,6 +662,7 @@ int enic_vf_representor_init(struct rte_eth_dev *eth_dev, void *init_params)
>  	eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	eth_dev->data->representor_id = vf->vf_id;
> +	eth_dev->data->parent_port_id = pf->port_id;
>  	eth_dev->data->mac_addrs = rte_zmalloc("enic_mac_addr_vf",
>  		sizeof(struct rte_ether_addr) *
>  		ENIC_UNICAST_PERFECT_FILTERS, 0);
> diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
> index 0481b55381..865b637585 100644
> --- a/drivers/net/i40e/i40e_vf_representor.c
> +++ b/drivers/net/i40e/i40e_vf_representor.c
> @@ -514,6 +514,7 @@ i40e_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
>  	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR |
>  					RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
>  	ethdev->data->representor_id = representor->vf_id;
> +	ethdev->data->parent_port_id = pf->dev_data->parent_port_id;
> 
>  	/* Setting the number queues allocated to the VF */
>  	ethdev->data->nb_rx_queues = vf->vsi->nb_qps; diff --git a/drivers/net/ice/ice_dcf_vf_representor.c
> b/drivers/net/ice/ice_dcf_vf_representor.c
> index 970461f3e9..c7cd3fd290 100644
> --- a/drivers/net/ice/ice_dcf_vf_representor.c
> +++ b/drivers/net/ice/ice_dcf_vf_representor.c
> @@ -418,6 +418,7 @@ ice_dcf_vf_repr_init(struct rte_eth_dev *vf_rep_eth_dev, void *init_param)
> 
>  	vf_rep_eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  	vf_rep_eth_dev->data->representor_id = repr->vf_id;
> +	vf_rep_eth_dev->data->parent_port_id =
> +repr->dcf_eth_dev->data->port_id;
> 
>  	vf_rep_eth_dev->data->mac_addrs = &repr->mac_addr;
> 
> diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
> index d5b636a194..7a2063849e 100644
> --- a/drivers/net/ixgbe/ixgbe_vf_representor.c
> +++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
> @@ -197,6 +197,7 @@ ixgbe_vf_representor_init(struct rte_eth_dev *ethdev, void *init_params)
> 
>  	ethdev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  	ethdev->data->representor_id = representor->vf_id;
> +	ethdev->data->parent_port_id = representor->pf_ethdev->data->port_id;
> 
>  	/* Set representor device ops */
>  	ethdev->dev_ops = &ixgbe_vf_representor_dev_ops; diff --git a/drivers/net/mlx5/linux/mlx5_os.c
> b/drivers/net/mlx5/linux/mlx5_os.c
> index 5f8766aa48..a68fa7beb7 100644
> --- a/drivers/net/mlx5/linux/mlx5_os.c
> +++ b/drivers/net/mlx5/linux/mlx5_os.c
> @@ -1677,6 +1677,23 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>  	if (priv->representor) {
>  		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  		eth_dev->data->representor_id = priv->representor_id;
> +		MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
> +			struct mlx5_priv *opriv =
> +				rte_eth_devices[port_id].data->dev_private;
> +			if (opriv &&
> +			    opriv->master &&
> +			    opriv->domain_id == priv->domain_id &&
> +			    opriv->sh == priv->sh) {
> +				eth_dev->data->parent_port_id =
> +					rte_eth_devices[port_id].data->port_id;
> +				break;
> +			}
> +		}
> +		if (port_id >= RTE_MAX_ETHPORTS) {
> +			DRV_LOG(ERR, "no master device for representor");
> +			err = ENODEV;
> +			goto error;
> +		}
>  	}
>  	priv->mp_id.port_id = eth_dev->data->port_id;
>  	strlcpy(priv->mp_id.name, MLX5_MP_NAME, RTE_MP_MAX_NAME_LEN); diff --git a/drivers/net/mlx5/windows/mlx5_os.c
> b/drivers/net/mlx5/windows/mlx5_os.c
> index 7e1df1c751..0c5a02bfcb 100644
> --- a/drivers/net/mlx5/windows/mlx5_os.c
> +++ b/drivers/net/mlx5/windows/mlx5_os.c
> @@ -543,6 +543,23 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
>  	if (priv->representor) {
>  		eth_dev->data->dev_flags |= RTE_ETH_DEV_REPRESENTOR;
>  		eth_dev->data->representor_id = priv->representor_id;
> +		MLX5_ETH_FOREACH_DEV(port_id, priv->pci_dev) {
> +			struct mlx5_priv *opriv =
> +				rte_eth_devices[port_id].data->dev_private;
> +			if (opriv &&
> +			    opriv->master &&
> +			    opriv->domain_id == priv->domain_id &&
> +			    opriv->sh == priv->sh) {
> +				eth_dev->data->parent_port_id =
> +					rte_eth_devices[port_id].data->port_id;

Could this value be different from port_id?

> +				break;
> +			}
> +		}
> +		if (port_id >= RTE_MAX_ETHPORTS) {
> +			DRV_LOG(ERR, "no master device for representor");
> +			err = ENODEV;
> +			goto error;

There shouldn't be an error here.
The parent port ID defaults to 0, which could be wrong if multiple PFs are probed; let's default to the current port ID instead.
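
For example (just a sketch of what I mean, names taken from the hunk above, not a tested change):

	if (port_id >= RTE_MAX_ETHPORTS) {
		/* No master device found (yet); fall back to the
		 * representor's own port ID instead of failing.
		 */
		eth_dev->data->parent_port_id = eth_dev->data->port_id;
	}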

> +		}
>  	}
>  	/*
>  	 * Store associated network device interface index. This index diff --git a/lib/ethdev/ethdev_driver.h
> b/lib/ethdev/ethdev_driver.h index fd5b7ca550..d1a1499538 100644
> --- a/lib/ethdev/ethdev_driver.h
> +++ b/lib/ethdev/ethdev_driver.h
> @@ -1287,8 +1287,8 @@ struct rte_eth_devargs {
>   * For backward compatibility, if no representor info, direct
>   * map legacy VF (no controller and pf).
>   *
> - * @param ethdev
> - *  Handle of ethdev port.
> + * @param port_id
> + *  Port ID of the backing device.
>   * @param type
>   *  Representor type.
>   * @param controller
> @@ -1305,7 +1305,7 @@ struct rte_eth_devargs {
>   */
>  __rte_internal
>  int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t port_id,
>  			   enum rte_eth_representor_type type,
>  			   int controller, int pf, int representor_port,
>  			   uint16_t *repr_id);
> diff --git a/lib/ethdev/rte_class_eth.c b/lib/ethdev/rte_class_eth.c index 1fe5fa1f36..167d2d798c 100644
> --- a/lib/ethdev/rte_class_eth.c
> +++ b/lib/ethdev/rte_class_eth.c
> @@ -95,14 +95,32 @@ eth_representor_cmp(const char *key __rte_unused,
>  		c = i / (np * nf);
>  		p = (i / nf) % np;
>  		f = i % nf;
> -		if (rte_eth_representor_id_get(edev,
> +		/*
> +		 * rte_eth_representor_id_get expects to receive port ID of
> +		 * the master device, but in order to maintain compatibility
> +		 * with mlx5's hardware bonding and legacy representor
> +		 * specification using just VF numbers, the representor's port
> +		 * ID is tried first.
> +		 */
> +		ret = rte_eth_representor_id_get(edev->data->port_id,
>  			eth_da.type,
>  			eth_da.nb_mh_controllers == 0 ? -1 :
>  					eth_da.mh_controllers[c],
>  			eth_da.nb_ports == 0 ? -1 : eth_da.ports[p],
>  			eth_da.nb_representor_ports == 0 ? -1 :
>  					eth_da.representor_ports[f],
> -			&id) < 0)
> +			&id);
> +		if (ret == -ENOTSUP)
> +			ret = rte_eth_representor_id_get(
> +				edev->data->parent_port_id,
> +				eth_da.type,
> +				eth_da.nb_mh_controllers == 0 ? -1 :
> +						eth_da.mh_controllers[c],
> +				eth_da.nb_ports == 0 ? -1 : eth_da.ports[p],
> +				eth_da.nb_representor_ports == 0 ? -1 :
> +						eth_da.representor_ports[f],
> +				&id);
> +		if (ret < 0)
>  			continue;
>  		if (data->representor_id == id)
>  			return 0;
> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 9d95cd11e1..228ef7bf23 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -5997,7 +5997,7 @@ rte_eth_devargs_parse(const char *dargs, struct rte_eth_devargs *eth_da)  }
> 
>  int
> -rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
> +rte_eth_representor_id_get(uint16_t port_id,
>  			   enum rte_eth_representor_type type,
>  			   int controller, int pf, int representor_port,
>  			   uint16_t *repr_id)
> @@ -6013,7 +6013,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>  		return -EINVAL;
> 
>  	/* Get PMD representor range info. */
> -	ret = rte_eth_representor_info_get(ethdev->data->port_id, NULL);
> +	ret = rte_eth_representor_info_get(port_id, NULL);
>  	if (ret == -ENOTSUP && type == RTE_ETH_REPRESENTOR_VF &&
>  	    controller == -1 && pf == -1) {
>  		/* Direct mapping for legacy VF representor. */ @@ -6028,7 +6028,7 @@ rte_eth_representor_id_get(const struct
> rte_eth_dev *ethdev,
>  	if (info == NULL)
>  		return -ENOMEM;
>  	info->nb_ranges_alloc = n;
> -	ret = rte_eth_representor_info_get(ethdev->data->port_id, info);
> +	ret = rte_eth_representor_info_get(port_id, info);
>  	if (ret < 0)
>  		goto out;
> 
> @@ -6047,7 +6047,7 @@ rte_eth_representor_id_get(const struct rte_eth_dev *ethdev,
>  			continue;
>  		if (info->ranges[i].id_end < info->ranges[i].id_base) {
>  			RTE_LOG(WARNING, EAL, "Port %hu invalid representor ID Range %u - %u, entry %d\n",
> -				ethdev->data->port_id, info->ranges[i].id_base,
> +				port_id, info->ranges[i].id_base,
>  				info->ranges[i].id_end, i);
>  			continue;
> 
> diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h index edf96de2dc..13cb84b52f 100644
> --- a/lib/ethdev/rte_ethdev_core.h
> +++ b/lib/ethdev/rte_ethdev_core.h
> @@ -185,6 +185,10 @@ struct rte_eth_dev_data {
>  			/**< Switch-specific identifier.
>  			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
>  			 */
> +	uint16_t parent_port_id;
> +			/**< Port ID of the backing device.
> +			 *   Valid if RTE_ETH_DEV_REPRESENTOR in dev_flags.
> +			 */
> 
>  	pthread_mutex_t flow_ops_mutex; /**< rte_flow ops mutex. */
>  	uint64_t reserved_64s[4]; /**< Reserved for future fields */
> --
> 2.30.2


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH v2 01/15] ethdev: introduce shared Rx queue
  2021-08-26 11:58  4%               ` Jerin Jacob
@ 2021-08-28 14:16  0%                 ` Xueming(Steven) Li
  2021-08-30  9:31  3%                   ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Xueming(Steven) Li @ 2021-08-28 14:16 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Ferruh Yigit, NBU-Contact-Thomas Monjalon, Andrew Rybchenko



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Thursday, August 26, 2021 7:58 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>
> Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> 
> On Thu, Aug 19, 2021 at 5:39 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Thursday, August 19, 2021 1:27 PM
> > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Andrew Rybchenko
> > > <andrew.rybchenko@oktetlabs.ru>
> > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > >
> > > On Wed, Aug 18, 2021 at 4:44 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > Sent: Tuesday, August 17, 2021 11:12 PM
> > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > > >
> > > > > On Tue, Aug 17, 2021 at 5:01 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > Sent: Tuesday, August 17, 2021 5:33 PM
> > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx
> > > > > > > queue
> > > > > > >
> > > > > > > On Wed, Aug 11, 2021 at 7:34 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > > > > > > >
> > > > > > > > In current DPDK framework, each RX queue is pre-loaded
> > > > > > > > with mbufs for incoming packets. When number of
> > > > > > > > representors scale out in a switch domain, the memory
> > > > > > > > consumption became significant. Most important, polling
> > > > > > > > all ports leads to high cache miss, high latency and low throughput.
> > > > > > > >
> > > > > > > > This patch introduces shared RX queue. Ports with same
> > > > > > > > configuration in a switch domain could share RX queue set by specifying sharing group.
> > > > > > > > Polling any queue using same shared RX queue receives
> > > > > > > > packets from all member ports. Source port is identified by mbuf->port.
> > > > > > > >
> > > > > > > > Port queue number in a shared group should be identical.
> > > > > > > > Queue index is
> > > > > > > > 1:1 mapped in shared group.
> > > > > > > >
> > > > > > > > Share RX queue must be polled on single thread or core.
> > > > > > > >
> > > > > > > > Multiple groups is supported by group ID.
> > > > > > > >
> > > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > > Cc: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > ---
> > > > > > > > Rx queue object could be used as shared Rx queue object,
> > > > > > > > it's important to clear all queue control callback api that using queue object:
> > > > > > > >
> > > > > > > > https://mails.dpdk.org/archives/dev/2021-July/215574.html
> > > > > > >
> > > > > > > >  #undef RTE_RX_OFFLOAD_BIT2STR diff --git
> > > > > > > > a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > > > > > > > d2b27c351f..a578c9db9d 100644
> > > > > > > > --- a/lib/ethdev/rte_ethdev.h
> > > > > > > > +++ b/lib/ethdev/rte_ethdev.h
> > > > > > > > @@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
> > > > > > > >         uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > > > > > > >         uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > > > > > > >         uint16_t rx_nseg; /**< Number of descriptions in rx_seg array.
> > > > > > > > */
> > > > > > > > +       uint32_t shared_group; /**< Shared port group
> > > > > > > > + index in switch domain. */
> > > > > > >
> > > > > > > Not to able to see anyone setting/creating this group ID test application.
> > > > > > > How this group is created?
> > > > > >
> > > > > > Nice catch, the initial testpmd version only support one default group(0).
> > > > > > All ports that supports shared-rxq assigned in same group.
> > > > > >
> > > > > > We should be able to change "--rxq-shared" to "--rxq-shared-group"
> > > > > > to support group other than default.
> > > > > >
> > > > > > To support more groups simultaneously, need to consider
> > > > > > testpmd forwarding stream core assignment, all streams in same group need to stay on same core.
> > > > > > It's possible to specify how many ports to increase group
> > > > > > number, but user must schedule stream affinity carefully - error prone.
> > > > > >
> > > > > > On the other hand, one group should be sufficient for most
> > > > > > customer, the doubt is whether it valuable to support multiple groups test.
> > > > >
> > > > > Ack. One group is enough in testpmd.
> > > > >
> > > > > My question was more about who and how this group is created,
> > > > > Should n't we need API to create shared_group? If we do the following, at least, I can think, how it can be implemented in SW
> or other HW.
> > > > >
> > > > > - Create aggregation queue group
> > > > > - Attach multiple  Rx queues to the aggregation queue group
> > > > > - Pull the packets from the queue group(which internally fetch
> > > > > from the Rx queues _attached_)
> > > > >
> > > > > Does the above kind of sequence, break your representor use case?
> > > >
> > > > Seems more like a set of EAL wrapper. Current API tries to minimize the application efforts to adapt shared-rxq.
> > > > - step 1, not sure how important it is to create group with API, in rte_flow, group is created on demand.
> > >
> > > Which rte_flow pattern/action for this?
> >
> > No rte_flow for this, just recalled that the group in rte_flow is not created along with flow, not via api.
> > I don’t see anything else to create along with group, just double whether it valuable to introduce a new api set to manage group.
> 
> See below.
> 
> >
> > >
> > > > - step 2, currently, the attaching is done in rte_eth_rx_queue_setup, specify offload and group in rx_conf struct.
> > > > - step 3, define a dedicate api to receive packets from shared rxq? Looks clear to receive packets from shared rxq.
> > > >   currently, rxq objects in share group is same - the shared rxq, so the eth callback eth_rx_burst_t(rxq_obj, mbufs, n) could
> > > >   be used to receive packets from any ports in group, normally the first port(PF) in group.
> > > >   An alternative way is defining a vdev with same queue number and copy rxq objects will make the vdev a proxy of
> > > >   the shared rxq group - this could be an helper API.
> > > >
> > > > Anyway the wrapper doesn't break use case, step 3 api is more clear, need to understand how to implement efficiently.
> > >
> > > Are you doing this feature based on any HW support or it just pure
> > > SW thing, If it is SW, It is better to have just new vdev for like drivers/net/bonding/. This we can help aggregate multiple Rxq across
> the multiple ports of same the driver.
> >
> > Based on HW support.
> 
> In Marvel HW, we do some support, I will outline here and some queries on this.
> 
> # We need to create some new HW structure for aggregation # Connect each Rxq to the new HW structure for aggregation # Use
> rx_burst from the new HW structure.
> 
> Could you outline your HW support?
> 
> Also, I am not able to understand how this will reduce the memory, atleast in our HW need creating more memory now to deal this as
> we need to deal new HW structure.
> 
> How is in your HW it reduces the memory? Also, if memory is the constraint, why NOT reduce the number of queues.
> 

Glad to know that Marvell is working on this. What's the status of the driver implementation?

In my PMD implementation it's very similar: a new HW object, a shared memory pool, is created to replace the per-rxq memory pool.
A legacy rxq is fed with as many allocated mbufs as it has descriptors; shared rxqs instead share the same pool, so there is no need to supply
mbufs for each rxq separately, just feed the shared rxq.

So the memory saving comes from the mbufs per rxq: even with 1000 representors in a shared rxq group, the mbufs consumed are those of a single rxq.
In other words, new members of a shared rxq don't allocate new mbufs to feed their rxq; they just share the existing shared rxq (HW mempool).
The memory required to set up each rxq itself doesn't change much, agreed.
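
To illustrate the intended application-side usage, here is a rough sketch based only on what this series adds (the shared_group field in
struct rte_eth_rxconf and the RTE_ETH_RX_OFFLOAD_SHARED_RXQ offload); the port IDs, descriptor count and mempool are made-up
placeholders, not part of the proposal:

	struct rte_mbuf *pkts[32];
	struct rte_eth_rxconf rxconf = {
		.offloads = RTE_ETH_RX_OFFLOAD_SHARED_RXQ,
		.shared_group = 0, /* all member ports join group 0 */
	};
	uint16_t member_ports[] = { pf_port_id, repr_port_id }; /* example IDs */
	uint16_t nb_rx;
	unsigned int i;

	for (i = 0; i < RTE_DIM(member_ports); i++)
		rte_eth_rx_queue_setup(member_ports[i], 0 /* queue index */, 512,
				       rte_eth_dev_socket_id(member_ports[i]),
				       &rxconf, mb_pool);

	/* Polling any member queue returns packets from every port in the
	 * group; the original receiving port is reported in mbuf->port.
	 */
	nb_rx = rte_eth_rx_burst(pf_port_id, 0, pkts, RTE_DIM(pkts));

Apart from the extra offload flag and the group number, the application keeps using the normal rte_eth_rx_burst() path.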

> # Also, I was thinking, one way to avoid the fast path or ABI change would like.
> 
> # Driver Initializes one more eth_dev_ops in driver as aggregator ethdev # devargs of new ethdev or specific API like
> drivers/net/bonding/rte_eth_bond.h can take the argument (port, queue) tuples which needs to aggregate by new ethdev port # No
> change in fastpath or ABI is required in this model.
> 

This could be an option to access the shared rxq. What would be the difference with the new PMD?
What would the PMD driver do differently to create the new device?

Is it important in your implementation? Does it work with the existing rx_burst API?

> 
> 
> > Most user might uses PF in group as the anchor port to rx burst, current definition should be easy for them to migrate.
> > but some user might prefer grouping some hot
> > plug/unpluggedrepresentors, EAL could provide wrappers, users could do that either due to the strategy not complex enough.
> Anyway, welcome any suggestion.
> >
> > >
> > >
> > > >
> > > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > >         /**
> > > > > > > >          * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > > > > > > >          * Only offloads set on rx_queue_offload_capa or
> > > > > > > > rx_offload_capa @@ -1373,6 +1374,12 @@ struct rte_eth_conf
> > > > > > > > { #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
> > > > > > > >  #define DEV_RX_OFFLOAD_RSS_HASH                0x00080000
> > > > > > > >  #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> > > > > > > > +/**
> > > > > > > > + * Rx queue is shared among ports in same switch domain
> > > > > > > > +to save memory,
> > > > > > > > + * avoid polling each port. Any port in group can be used to receive packets.
> > > > > > > > + * Real source port number saved in mbuf->port field.
> > > > > > > > + */
> > > > > > > > +#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ   0x00200000
> > > > > > > >
> > > > > > > >  #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > > > > > > >                                  DEV_RX_OFFLOAD_UDP_CKSUM
> > > > > > > > | \
> > > > > > > > --
> > > > > > > > 2.25.1
> > > > > > > >

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] Marvell v21.11 Roadmap
@ 2021-08-29  8:48  4% Jerin Jacob Kollanukkaran
  0 siblings, 0 replies; 200+ results
From: Jerin Jacob Kollanukkaran @ 2021-08-29  8:48 UTC (permalink / raw)
  To: dev
  Cc: Satananda Burla, Prasun Kapoor, GR-ipbu-jerinj-team, Radha Chintakuntla

EAL:
- add oops support
http://patches.dpdk.org/project/dpdk/patch/20210817032723.3997054-2-jerinj@marvell.com/
- make rte_intr_handle hidden for better ABI
https://patches.dpdk.org/project/dpdk/patch/20210826145726.102081-2-hkalra@marvell.com/

ethdev:
- mtr: enhance input color table features 
http://patches.dpdk.org/project/dpdk/patch/20210820082401.3778736-1-jerinj@marvell.com/
- introduce IP reassembly in inline protocol offload
https://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/

cryptodev:
- ABI improvements to hide internal interface
- Improve crypto/security session data structure to remove unnecessary indirection in data path.
https://patches.dpdk.org/project/dpdk/patch/20210731181327.660296-2-gakhil@marvell.com/

Security:
- Unit test application for IPsec
https://patches.dpdk.org/project/dpdk/list/?series=18253
	- Known vector tests & combined mode tests
	- AES-GCM
	- IV generation
	- Negative test ICV corruption
	- UDP encapsulation
- Lifetime configuration & extension of rte_crypto_op for soft expiry notification
- IV gen disable for known vector tests with security outbound operations
- IP hdr verify & UDP port verification with security inbound operations
- Inner checksum computation/verification support in security operations

eventdev:
- eventdev: ABI rework to make driver interface internal.
https://patches.dpdk.org/project/dpdk/patch/20210823194020.1229-1-pbhagavatula@marvell.com/
- event vector support for l3fwd application
- event vector support for ipsec-gw application
https://patches.dpdk.org/project/dpdk/patch/20210826100300.4007363-1-schalla@marvell.com/

net/cnxk:
- Inline IPsec support for both poll mode and event mode in CN10K and event mode in CN9K (AES-GCM, AES-CBC SHA1).
- Egress traffic manager support for CN9K and CN10K SoCs.
- Ingress traffic metering and policing support for CN10K with a 3-level hierarchy.

crypto/cnxk: 
- Lookaside protocol support with crypto_cn9k (AES-GCM)
- Crypto adapter support with crypto_cn9k & crypto_cn10k
- Lookaside protocol additional features,
	- AES-CBC-SHA1 HMAC
	- UDP encapsulation
	- Transport
- ZUC 256

mempool/cnxk: 
- add telemetry endpoints

event/cnxk: 
- add telemetry endpoints

dma/cnxk: 
- Add cnxk driver based on new DMA APIs


^ permalink raw reply	[relevance 4%]

* [dpdk-dev] [PATCH 2/8] cryptodev: move inline APIs into separate structure
  @ 2021-08-29 12:51  3% ` Akhil Goyal
  0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2021-08-29 12:51 UTC (permalink / raw)
  To: dev
  Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
	konstantin.ananyev, thomas, roy.fan.zhang, asomalap,
	ruifeng.wang, ajit.khaparde, pablo.de.lara.guarch, fiona.trahe,
	adwivedi, michaelsh, rnagadheeraj, jianjay.zhou, jerinj,
	Akhil Goyal

Move fastpath inline function pointers from rte_cryptodev into a
separate structure accessed via a flat array.
The intention is to make rte_cryptodev and related structures private
to avoid future API/ABI breakages.
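
For reference, a minimal sketch (not part of this patch) of how a public
enqueue wrapper could dispatch through the new flat array; indexing
rte_cryptodev_api[] by dev_id is an assumption here, and the wrapper name is
illustrative:

    #include <rte_crypto.h>
    #include <rte_cryptodev_core.h>

    static inline uint16_t
    crypto_enqueue_sketch(uint8_t dev_id, uint8_t qp_id,
                          struct rte_crypto_op **ops, uint16_t nb_ops)
    {
            const struct rte_cryptodev_api *api = &rte_cryptodev_api[dev_id];

            /* No struct rte_cryptodev dereference on the fast path. */
            return api->enqueue_burst(dev_id, qp_id, ops, nb_ops);
    }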

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
---
 lib/cryptodev/cryptodev_pmd.c      | 33 ++++++++++++++++++++++++++++++
 lib/cryptodev/cryptodev_pmd.h      |  9 ++++++++
 lib/cryptodev/rte_cryptodev.c      |  3 +++
 lib/cryptodev/rte_cryptodev_core.h | 19 +++++++++++++++++
 lib/cryptodev/version.map          |  4 ++++
 5 files changed, 68 insertions(+)

diff --git a/lib/cryptodev/cryptodev_pmd.c b/lib/cryptodev/cryptodev_pmd.c
index 71e34140cd..46772dc355 100644
--- a/lib/cryptodev/cryptodev_pmd.c
+++ b/lib/cryptodev/cryptodev_pmd.c
@@ -158,3 +158,36 @@ rte_cryptodev_pmd_destroy(struct rte_cryptodev *cryptodev)
 
 	return 0;
 }
+
+static uint16_t
+dummy_crypto_enqueue_burst(__rte_unused uint8_t dev_id,
+			  __rte_unused uint8_t qp_id,
+			  __rte_unused struct rte_crypto_op **ops,
+			  __rte_unused uint16_t nb_ops)
+{
+	CDEV_LOG_ERR(
+		"crypto enqueue burst requested for unconfigured crypto device");
+	return 0;
+}
+
+static uint16_t
+dummy_crypto_dequeue_burst(__rte_unused uint8_t dev_id,
+			  __rte_unused uint8_t qp_id,
+			  __rte_unused struct rte_crypto_op **ops,
+			  __rte_unused uint16_t nb_ops)
+{
+	CDEV_LOG_ERR(
+		"crypto dequeue burst requested for unconfigured crypto device");
+	return 0;
+}
+
+void
+rte_cryptodev_api_reset(struct rte_cryptodev_api *api)
+{
+	static const struct rte_cryptodev_api dummy = {
+		.enqueue_burst = dummy_crypto_enqueue_burst,
+		.dequeue_burst = dummy_crypto_dequeue_burst,
+	};
+
+	*api = dummy;
+}
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index f775ba6beb..eeaea13a23 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -520,6 +520,15 @@ RTE_INIT(init_ ##driver_id)\
 	driver_id = rte_cryptodev_allocate_driver(&crypto_drv, &(drv));\
 }
 
+/**
+ * Reset crypto device fastpath APIs to dummy values.
+ *
+ * @param api The *api* pointer to reset.
+ */
+__rte_internal
+void
+rte_cryptodev_api_reset(struct rte_cryptodev_api *api);
+
 static inline void *
 get_sym_session_private_data(const struct rte_cryptodev_sym_session *sess,
 		uint8_t driver_id) {
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 9fa3aff1d3..26f8390668 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -54,6 +54,9 @@ static struct rte_cryptodev_global cryptodev_globals = {
 		.nb_devs		= 0
 };
 
+/* Public fastpath APIs. */
+struct rte_cryptodev_api *rte_cryptodev_api;
+
 /* spinlock for crypto device callbacks */
 static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;
 
diff --git a/lib/cryptodev/rte_cryptodev_core.h b/lib/cryptodev/rte_cryptodev_core.h
index 1633e55889..ec38f70e0c 100644
--- a/lib/cryptodev/rte_cryptodev_core.h
+++ b/lib/cryptodev/rte_cryptodev_core.h
@@ -25,6 +25,25 @@ typedef uint16_t (*enqueue_pkt_burst_t)(void *qp,
 		struct rte_crypto_op **ops,	uint16_t nb_ops);
 /**< Enqueue packets for processing on queue pair of a device. */
 
+typedef uint16_t (*rte_crypto_dequeue_burst_t)(uint8_t dev_id, uint8_t qp_id,
+					      struct rte_crypto_op **ops,
+					      uint16_t nb_ops);
+/**< @internal Dequeue processed packets from queue pair of a device. */
+typedef uint16_t (*rte_crypto_enqueue_burst_t)(uint8_t dev_id, uint8_t qp_id,
+					      struct rte_crypto_op **ops,
+					      uint16_t nb_ops);
+/**< @internal Enqueue packets for processing on queue pair of a device. */
+
+struct rte_cryptodev_api {
+	rte_crypto_enqueue_burst_t enqueue_burst;
+	/**< PMD enqueue burst function. */
+	rte_crypto_dequeue_burst_t dequeue_burst;
+	/**< PMD dequeue burst function. */
+	uintptr_t reserved[6];
+} __rte_cache_aligned;
+
+extern struct rte_cryptodev_api *rte_cryptodev_api;
+
 /**
  * @internal
  * The data part, with no function pointers, associated with each device.
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index 2fdf70002d..050089ae55 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -57,6 +57,9 @@ DPDK_22 {
 	rte_cryptodev_sym_session_init;
 	rte_cryptodevs;
 
+	#added in 21.11
+	rte_cryptodev_api;
+
 	local: *;
 };
 
@@ -114,6 +117,7 @@ INTERNAL {
 	global:
 
 	rte_cryptodev_allocate_driver;
+	rte_cryptodev_api_reset;
 	rte_cryptodev_pmd_allocate;
 	rte_cryptodev_pmd_callback_process;
 	rte_cryptodev_pmd_create;
-- 
2.25.1


^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 01/15] ethdev: introduce shared Rx queue
  2021-08-28 14:16  0%                 ` Xueming(Steven) Li
@ 2021-08-30  9:31  3%                   ` Jerin Jacob
  2021-08-30 10:13  0%                     ` Xueming(Steven) Li
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-08-30  9:31 UTC (permalink / raw)
  To: Xueming(Steven) Li
  Cc: dpdk-dev, Ferruh Yigit, NBU-Contact-Thomas Monjalon, Andrew Rybchenko

On Sat, Aug 28, 2021 at 7:46 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Thursday, August 26, 2021 7:58 PM
> > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> > Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> >
> > On Thu, Aug 19, 2021 at 5:39 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Sent: Thursday, August 19, 2021 1:27 PM
> > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Andrew Rybchenko
> > > > <andrew.rybchenko@oktetlabs.ru>
> > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > >
> > > > On Wed, Aug 18, 2021 at 4:44 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > Sent: Tuesday, August 17, 2021 11:12 PM
> > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > > > >
> > > > > > On Tue, Aug 17, 2021 at 5:01 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > -----Original Message-----
> > > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > Sent: Tuesday, August 17, 2021 5:33 PM
> > > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx
> > > > > > > > queue
> > > > > > > >
> > > > > > > > On Wed, Aug 11, 2021 at 7:34 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > > > > > > > >
> > > > > > > > > In current DPDK framework, each RX queue is pre-loaded
> > > > > > > > > with mbufs for incoming packets. When number of
> > > > > > > > > representors scale out in a switch domain, the memory
> > > > > > > > > consumption became significant. Most important, polling
> > > > > > > > > all ports leads to high cache miss, high latency and low throughput.
> > > > > > > > >
> > > > > > > > > This patch introduces shared RX queue. Ports with same
> > > > > > > > > configuration in a switch domain could share RX queue set by specifying sharing group.
> > > > > > > > > Polling any queue using same shared RX queue receives
> > > > > > > > > packets from all member ports. Source port is identified by mbuf->port.
> > > > > > > > >
> > > > > > > > > Port queue number in a shared group should be identical.
> > > > > > > > > Queue index is
> > > > > > > > > 1:1 mapped in shared group.
> > > > > > > > >
> > > > > > > > > Share RX queue must be polled on single thread or core.
> > > > > > > > >
> > > > > > > > > Multiple groups is supported by group ID.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > > > Cc: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > > ---
> > > > > > > > > Rx queue object could be used as shared Rx queue object,
> > > > > > > > > it's important to clear all queue control callback api that using queue object:
> > > > > > > > >
> > > > > > > > > https://mails.dpdk.org/archives/dev/2021-July/215574.html
> > > > > > > >
> > > > > > > > >  #undef RTE_RX_OFFLOAD_BIT2STR diff --git
> > > > > > > > > a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > > > > > > > > d2b27c351f..a578c9db9d 100644
> > > > > > > > > --- a/lib/ethdev/rte_ethdev.h
> > > > > > > > > +++ b/lib/ethdev/rte_ethdev.h
> > > > > > > > > @@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
> > > > > > > > >         uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > > > > > > > >         uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > > > > > > > >         uint16_t rx_nseg; /**< Number of descriptions in rx_seg array.
> > > > > > > > > */
> > > > > > > > > +       uint32_t shared_group; /**< Shared port group
> > > > > > > > > + index in switch domain. */
> > > > > > > >
> > > > > > > > Not to able to see anyone setting/creating this group ID test application.
> > > > > > > > How this group is created?
> > > > > > >
> > > > > > > Nice catch, the initial testpmd version only support one default group(0).
> > > > > > > All ports that supports shared-rxq assigned in same group.
> > > > > > >
> > > > > > > We should be able to change "--rxq-shared" to "--rxq-shared-group"
> > > > > > > to support group other than default.
> > > > > > >
> > > > > > > To support more groups simultaneously, need to consider
> > > > > > > testpmd forwarding stream core assignment, all streams in same group need to stay on same core.
> > > > > > > It's possible to specify how many ports to increase group
> > > > > > > number, but user must schedule stream affinity carefully - error prone.
> > > > > > >
> > > > > > > On the other hand, one group should be sufficient for most
> > > > > > > customer, the doubt is whether it valuable to support multiple groups test.
> > > > > >
> > > > > > Ack. One group is enough in testpmd.
> > > > > >
> > > > > > My question was more about who and how this group is created,
> > > > > > Should n't we need API to create shared_group? If we do the following, at least, I can think, how it can be implemented in SW
> > or other HW.
> > > > > >
> > > > > > - Create aggregation queue group
> > > > > > - Attach multiple  Rx queues to the aggregation queue group
> > > > > > - Pull the packets from the queue group(which internally fetch
> > > > > > from the Rx queues _attached_)
> > > > > >
> > > > > > Does the above kind of sequence, break your representor use case?
> > > > >
> > > > > Seems more like a set of EAL wrapper. Current API tries to minimize the application efforts to adapt shared-rxq.
> > > > > - step 1, not sure how important it is to create group with API, in rte_flow, group is created on demand.
> > > >
> > > > Which rte_flow pattern/action for this?
> > >
> > > No rte_flow for this, just recalled that the group in rte_flow is not created along with flow, not via api.
> > > I don’t see anything else to create along with group, just double whether it valuable to introduce a new api set to manage group.
> >
> > See below.
> >
> > >
> > > >
> > > > > - step 2, currently, the attaching is done in rte_eth_rx_queue_setup, specify offload and group in rx_conf struct.
> > > > > - step 3, define a dedicate api to receive packets from shared rxq? Looks clear to receive packets from shared rxq.
> > > > >   currently, rxq objects in share group is same - the shared rxq, so the eth callback eth_rx_burst_t(rxq_obj, mbufs, n) could
> > > > >   be used to receive packets from any ports in group, normally the first port(PF) in group.
> > > > >   An alternative way is defining a vdev with same queue number and copy rxq objects will make the vdev a proxy of
> > > > >   the shared rxq group - this could be an helper API.
> > > > >
> > > > > Anyway the wrapper doesn't break use case, step 3 api is more clear, need to understand how to implement efficiently.
> > > >
> > > > Are you doing this feature based on any HW support or it just pure
> > > > SW thing, If it is SW, It is better to have just new vdev for like drivers/net/bonding/. This we can help aggregate multiple Rxq across
> > the multiple ports of same the driver.
> > >
> > > Based on HW support.
> >
> > In Marvel HW, we do some support, I will outline here and some queries on this.
> >
> > # We need to create some new HW structure for aggregation # Connect each Rxq to the new HW structure for aggregation # Use
> > rx_burst from the new HW structure.
> >
> > Could you outline your HW support?
> >
> > Also, I am not able to understand how this will reduce the memory, atleast in our HW need creating more memory now to deal this as
> > we need to deal new HW structure.
> >
> > How is in your HW it reduces the memory? Also, if memory is the constraint, why NOT reduce the number of queues.
> >
>
> Glad to know that Marvel is working on this, what's the status of driver implementation?
>
> In my PMD implementation, it's very similar, a new HW object shared memory pool is created to replace per rxq memory pool.
> Legacy rxq feed queue with allocated mbufs as number of descriptors, now shared rxqs share the same pool, no need to supply
> mbufs for each rxq, just feed the shared rxq.
>
> So the memory saving reflects to mbuf per rxq, even 1000 representors in shared rxq group, the mbufs consumed is one rxq.
> In other words, new members in shared rxq doesn’t allocate new mbufs to feed rxq, just share with existing shared rxq(HW mempool).
> The memory required to setup each rxq doesn't change too much, agree.

We can ask the application to configure the same mempool for multiple
RQs too, right? If the saving is based on sharing the mempool
with multiple RQs.

>
> > # Also, I was thinking, one way to avoid the fast path or ABI change would like.
> >
> > # Driver Initializes one more eth_dev_ops in driver as aggregator ethdev # devargs of new ethdev or specific API like
> > drivers/net/bonding/rte_eth_bond.h can take the argument (port, queue) tuples which needs to aggregate by new ethdev port # No
> > change in fastpath or ABI is required in this model.
> >
>
> This could be an option to access shared rxq. What's the difference of the new PMD?

No ABI or fast path changes are required.

> What's the difference of PMD driver to create the new device?
>
> Is it important in your implementation? Does it work with existing rx_burst api?

Yes. It will work with the existing rx_burst API.

>
> >
> >
> > > Most user might uses PF in group as the anchor port to rx burst, current definition should be easy for them to migrate.
> > > but some user might prefer grouping some hot
> > > plug/unpluggedrepresentors, EAL could provide wrappers, users could do that either due to the strategy not complex enough.
> > Anyway, welcome any suggestion.
> > >
> > > >
> > > >
> > > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > >         /**
> > > > > > > > >          * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > > > > > > > >          * Only offloads set on rx_queue_offload_capa or
> > > > > > > > > rx_offload_capa @@ -1373,6 +1374,12 @@ struct rte_eth_conf
> > > > > > > > > { #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
> > > > > > > > >  #define DEV_RX_OFFLOAD_RSS_HASH                0x00080000
> > > > > > > > >  #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> > > > > > > > > +/**
> > > > > > > > > + * Rx queue is shared among ports in same switch domain
> > > > > > > > > +to save memory,
> > > > > > > > > + * avoid polling each port. Any port in group can be used to receive packets.
> > > > > > > > > + * Real source port number saved in mbuf->port field.
> > > > > > > > > + */
> > > > > > > > > +#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ   0x00200000
> > > > > > > > >
> > > > > > > > >  #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > > > > > > > >                                  DEV_RX_OFFLOAD_UDP_CKSUM
> > > > > > > > > | \
> > > > > > > > > --
> > > > > > > > > 2.25.1
> > > > > > > > >

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2 01/15] ethdev: introduce shared Rx queue
  2021-08-30  9:31  3%                   ` Jerin Jacob
@ 2021-08-30 10:13  0%                     ` Xueming(Steven) Li
  0 siblings, 0 replies; 200+ results
From: Xueming(Steven) Li @ 2021-08-30 10:13 UTC (permalink / raw)
  To: Jerin Jacob
  Cc: dpdk-dev, Ferruh Yigit, NBU-Contact-Thomas Monjalon, Andrew Rybchenko



> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Monday, August 30, 2021 5:31 PM
> To: Xueming(Steven) Li <xuemingl@nvidia.com>
> Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon <thomas@monjalon.net>;
> Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> 
> On Sat, Aug 28, 2021 at 7:46 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Thursday, August 26, 2021 7:58 PM
> > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit <ferruh.yigit@intel.com>;
> > > NBU-Contact-Thomas Monjalon <thomas@monjalon.net>; Andrew Rybchenko
> > > <andrew.rybchenko@oktetlabs.ru>
> > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > >
> > > On Thu, Aug 19, 2021 at 5:39 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > Sent: Thursday, August 19, 2021 1:27 PM
> > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx queue
> > > > >
> > > > > On Wed, Aug 18, 2021 at 4:44 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > > >
> > > > > >
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > Sent: Tuesday, August 17, 2021 11:12 PM
> > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared Rx
> > > > > > > queue
> > > > > > >
> > > > > > > On Tue, Aug 17, 2021 at 5:01 PM Xueming(Steven) Li <xuemingl@nvidia.com> wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > > -----Original Message-----
> > > > > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > > Sent: Tuesday, August 17, 2021 5:33 PM
> > > > > > > > > To: Xueming(Steven) Li <xuemingl@nvidia.com>
> > > > > > > > > Cc: dpdk-dev <dev@dpdk.org>; Ferruh Yigit
> > > > > > > > > <ferruh.yigit@intel.com>; NBU-Contact-Thomas Monjalon
> > > > > > > > > <thomas@monjalon.net>; Andrew Rybchenko
> > > > > > > > > <andrew.rybchenko@oktetlabs.ru>
> > > > > > > > > Subject: Re: [PATCH v2 01/15] ethdev: introduce shared
> > > > > > > > > Rx queue
> > > > > > > > >
> > > > > > > > > On Wed, Aug 11, 2021 at 7:34 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > > > > > > > > >
> > > > > > > > > > In current DPDK framework, each RX queue is pre-loaded
> > > > > > > > > > with mbufs for incoming packets. When number of
> > > > > > > > > > representors scale out in a switch domain, the memory
> > > > > > > > > > consumption became significant. Most important,
> > > > > > > > > > polling all ports leads to high cache miss, high latency and low throughput.
> > > > > > > > > >
> > > > > > > > > > This patch introduces shared RX queue. Ports with same
> > > > > > > > > > configuration in a switch domain could share RX queue set by specifying sharing group.
> > > > > > > > > > Polling any queue using same shared RX queue receives
> > > > > > > > > > packets from all member ports. Source port is identified by mbuf->port.
> > > > > > > > > >
> > > > > > > > > > Port queue number in a shared group should be identical.
> > > > > > > > > > Queue index is
> > > > > > > > > > 1:1 mapped in shared group.
> > > > > > > > > >
> > > > > > > > > > Share RX queue must be polled on single thread or core.
> > > > > > > > > >
> > > > > > > > > > Multiple groups is supported by group ID.
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> > > > > > > > > > Cc: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > > > > > ---
> > > > > > > > > > Rx queue object could be used as shared Rx queue
> > > > > > > > > > object, it's important to clear all queue control callback api that using queue object:
> > > > > > > > > >
> > > > > > > > > > https://mails.dpdk.org/archives/dev/2021-July/215574.h
> > > > > > > > > > tml
> > > > > > > > >
> > > > > > > > > >  #undef RTE_RX_OFFLOAD_BIT2STR diff --git
> > > > > > > > > > a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> > > > > > > > > > index d2b27c351f..a578c9db9d 100644
> > > > > > > > > > --- a/lib/ethdev/rte_ethdev.h
> > > > > > > > > > +++ b/lib/ethdev/rte_ethdev.h
> > > > > > > > > > @@ -1047,6 +1047,7 @@ struct rte_eth_rxconf {
> > > > > > > > > >         uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> > > > > > > > > >         uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > > > > > > > > >         uint16_t rx_nseg; /**< Number of descriptions in rx_seg array.
> > > > > > > > > > */
> > > > > > > > > > +       uint32_t shared_group; /**< Shared port group
> > > > > > > > > > + index in switch domain. */
> > > > > > > > >
> > > > > > > > > Not to able to see anyone setting/creating this group ID test application.
> > > > > > > > > How this group is created?
> > > > > > > >
> > > > > > > > Nice catch, the initial testpmd version only support one default group(0).
> > > > > > > > All ports that supports shared-rxq assigned in same group.
> > > > > > > >
> > > > > > > > We should be able to change "--rxq-shared" to "--rxq-shared-group"
> > > > > > > > to support group other than default.
> > > > > > > >
> > > > > > > > To support more groups simultaneously, need to consider
> > > > > > > > testpmd forwarding stream core assignment, all streams in same group need to stay on same core.
> > > > > > > > It's possible to specify how many ports to increase group
> > > > > > > > number, but user must schedule stream affinity carefully - error prone.
> > > > > > > >
> > > > > > > > On the other hand, one group should be sufficient for most
> > > > > > > > customer, the doubt is whether it valuable to support multiple groups test.
> > > > > > >
> > > > > > > Ack. One group is enough in testpmd.
> > > > > > >
> > > > > > > My question was more about who and how this group is
> > > > > > > created, Should n't we need API to create shared_group? If
> > > > > > > we do the following, at least, I can think, how it can be
> > > > > > > implemented in SW
> > > or other HW.
> > > > > > >
> > > > > > > - Create aggregation queue group
> > > > > > > - Attach multiple  Rx queues to the aggregation queue group
> > > > > > > - Pull the packets from the queue group(which internally
> > > > > > > fetch from the Rx queues _attached_)
> > > > > > >
> > > > > > > Does the above kind of sequence, break your representor use case?
> > > > > >
> > > > > > Seems more like a set of EAL wrapper. Current API tries to minimize the application efforts to adapt shared-rxq.
> > > > > > - step 1, not sure how important it is to create group with API, in rte_flow, group is created on demand.
> > > > >
> > > > > Which rte_flow pattern/action for this?
> > > >
> > > > No rte_flow for this, just recalled that the group in rte_flow is not created along with flow, not via api.
> > > > I don’t see anything else to create along with group, just double whether it valuable to introduce a new api set to manage group.
> > >
> > > See below.
> > >
> > > >
> > > > >
> > > > > > - step 2, currently, the attaching is done in rte_eth_rx_queue_setup, specify offload and group in rx_conf struct.
> > > > > > - step 3, define a dedicate api to receive packets from shared rxq? Looks clear to receive packets from shared rxq.
> > > > > >   currently, rxq objects in share group is same - the shared rxq, so the eth callback eth_rx_burst_t(rxq_obj, mbufs, n) could
> > > > > >   be used to receive packets from any ports in group, normally the first port(PF) in group.
> > > > > >   An alternative way is defining a vdev with same queue number and copy rxq objects will make the vdev a proxy of
> > > > > >   the shared rxq group - this could be an helper API.
> > > > > >
> > > > > > Anyway the wrapper doesn't break use case, step 3 api is more clear, need to understand how to implement efficiently.
> > > > >
> > > > > Are you doing this feature based on any HW support or it just
> > > > > pure SW thing, If it is SW, It is better to have just new vdev
> > > > > for like drivers/net/bonding/. This we can help aggregate
> > > > > multiple Rxq across
> > > the multiple ports of same the driver.
> > > >
> > > > Based on HW support.
> > >
> > > In Marvel HW, we do some support, I will outline here and some queries on this.
> > >
> > > # We need to create some new HW structure for aggregation # Connect
> > > each Rxq to the new HW structure for aggregation # Use rx_burst from the new HW structure.
> > >
> > > Could you outline your HW support?
> > >
> > > Also, I am not able to understand how this will reduce the memory,
> > > atleast in our HW need creating more memory now to deal this as we need to deal new HW structure.
> > >
> > > How is in your HW it reduces the memory? Also, if memory is the constraint, why NOT reduce the number of queues.
> > >
> >
> > Glad to know that Marvel is working on this, what's the status of driver implementation?
> >
> > In my PMD implementation, it's very similar, a new HW object shared memory pool is created to replace per rxq memory pool.
> > Legacy rxq feed queue with allocated mbufs as number of descriptors,
> > now shared rxqs share the same pool, no need to supply mbufs for each rxq, just feed the shared rxq.
> >
> > So the memory saving reflects to mbuf per rxq, even 1000 representors in shared rxq group, the mbufs consumed is one rxq.
> > In other words, new members in shared rxq doesn’t allocate new mbufs to feed rxq, just share with existing shared rxq(HW
> mempool).
> > The memory required to setup each rxq doesn't change too much, agree.
> 
> We can ask the application to configure the same mempool for multiple RQ too. RIght? If the saving is based on sharing the mempool
> with multiple RQs.

Yes, using the same mempool is fundamental. The difference is how many mbufs are allocated from the pool.
Assuming 512 descriptors per rxq and 4 rxqs per device, that's 2.3K (mbuf) * 512 * 4 = 4.6M per device.
To support 1000 representors, a 4.6G mempool is needed :)
With a shared rxq, only 4.6M (one device) of mbufs are allocated from the mempool; they are shared by all rxqs in the group.
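
A back-of-envelope form of the same calculation (every constant here is an
assumption taken from this mail, not a measured value):

    /* bytes needed to pre-fill the rx queues of nb_ports ports */
    static unsigned long
    rxq_pool_bytes(unsigned long mbuf_sz, unsigned long nb_desc,
                   unsigned long nb_rxq, unsigned long nb_ports)
    {
            return mbuf_sz * nb_desc * nb_rxq * nb_ports;
    }

    /* rxq_pool_bytes(2304, 512, 4, 1)    ~= 4.6MB - one device            */
    /* rxq_pool_bytes(2304, 512, 4, 1000) ~= 4.6GB - per-rxq pools         */
    /* shared rxq group                   ~= 4.6MB - one pool for everyone */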

> 
> >
> > > # Also, I was thinking, one way to avoid the fast path or ABI change would like.
> > >
> > > # Driver Initializes one more eth_dev_ops in driver as aggregator
> > > ethdev # devargs of new ethdev or specific API like
> > > drivers/net/bonding/rte_eth_bond.h can take the argument (port, queue) tuples which needs to aggregate by new ethdev port #
> No change in fastpath or ABI is required in this model.
> > >
> >
> > This could be an option to access shared rxq. What's the difference of the new PMD?
> 
> No ABI and fast change are required.
> 
> > What's the difference of PMD driver to create the new device?
> >
> > Is it important in your implementation? Does it work with existing rx_burst api?
> 
> Yes . It will work with the existing rx_burst API.
> 
> >
> > >
> > >
> > > > Most user might uses PF in group as the anchor port to rx burst, current definition should be easy for them to migrate.
> > > > but some user might prefer grouping some hot
> > > > plug/unpluggedrepresentors, EAL could provide wrappers, users could do that either due to the strategy not complex enough.
> > > Anyway, welcome any suggestion.
> > > >
> > > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > >         /**
> > > > > > > > > >          * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > > > > > > > > >          * Only offloads set on rx_queue_offload_capa
> > > > > > > > > > or rx_offload_capa @@ -1373,6 +1374,12 @@ struct
> > > > > > > > > > rte_eth_conf { #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
> > > > > > > > > >  #define DEV_RX_OFFLOAD_RSS_HASH                0x00080000
> > > > > > > > > >  #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
> > > > > > > > > > +/**
> > > > > > > > > > + * Rx queue is shared among ports in same switch
> > > > > > > > > > +domain to save memory,
> > > > > > > > > > + * avoid polling each port. Any port in group can be used to receive packets.
> > > > > > > > > > + * Real source port number saved in mbuf->port field.
> > > > > > > > > > + */
> > > > > > > > > > +#define RTE_ETH_RX_OFFLOAD_SHARED_RXQ   0x00200000
> > > > > > > > > >
> > > > > > > > > >  #define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
> > > > > > > > > >
> > > > > > > > > > DEV_RX_OFFLOAD_UDP_CKSUM
> > > > > > > > > > | \
> > > > > > > > > > --
> > > > > > > > > > 2.25.1
> > > > > > > > > >

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [RFC] eventdev: uninline inline API functions
  @ 2021-08-30 16:00  2% ` Mattias Rönnblom
  2021-08-31 12:28  0%   ` Jerin Jacob
  0 siblings, 1 reply; 200+ results
From: Mattias Rönnblom @ 2021-08-30 16:00 UTC (permalink / raw)
  To: jerinj; +Cc: pbhagavatula, dev, bogdan.tanasa, Mattias Rönnblom

Replace the inline functions in the eventdev user application API with
regular non-inline API calls. This allows for a cleaner and simpler
API/ABI, but might well also cause performance regressions.

The purpose of this RFC patch is to allow for performance testing.

The rte_eventdev struct declaration should be moved off the public
API.
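
Application call sites keep the same prototypes; only the bodies move out of
the header into rte_eventdev.c. A minimal usage sketch (dev_id, port_id and
timeout_ticks are assumed to come from the usual setup code):

    #include <rte_eventdev.h>

    static void
    enqueue_dequeue_example(uint8_t dev_id, uint8_t port_id,
                            uint64_t timeout_ticks)
    {
            struct rte_event ev = { .queue_id = 0, .op = RTE_EVENT_OP_NEW };

            /* Prototypes are unchanged, so no application code changes. */
            (void)rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
            (void)rte_event_dequeue_burst(dev_id, port_id, &ev, 1,
                                          timeout_ticks);
    }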

Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
---
 drivers/net/octeontx/octeontx_ethdev.h  |  1 +
 lib/eventdev/rte_event_eth_rx_adapter.h |  1 +
 lib/eventdev/rte_event_eth_tx_adapter.c | 31 ++++++++
 lib/eventdev/rte_event_eth_tx_adapter.h | 35 ++-------
 lib/eventdev/rte_eventdev.c             | 82 +++++++++++++++++++++
 lib/eventdev/rte_eventdev.h             | 94 +++----------------------
 lib/eventdev/version.map                |  4 ++
 7 files changed, 134 insertions(+), 114 deletions(-)

diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37..9402105fcf 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -9,6 +9,7 @@
 
 #include <rte_common.h>
 #include <ethdev_driver.h>
+#include <eventdev_pmd.h>
 #include <rte_eventdev.h>
 #include <rte_mempool.h>
 #include <rte_memory.h>
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
index 182dd2e5dd..79f4822fb0 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.h
+++ b/lib/eventdev/rte_event_eth_rx_adapter.h
@@ -84,6 +84,7 @@ extern "C" {
 #include <rte_service.h>
 
 #include "rte_eventdev.h"
+#include "eventdev_pmd.h"
 
 #define RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE 32
 
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
index 18c0359db7..74f88e6147 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.c
+++ b/lib/eventdev/rte_event_eth_tx_adapter.c
@@ -1154,6 +1154,37 @@ rte_event_eth_tx_adapter_start(uint8_t id)
 	return ret;
 }
 
+uint16_t
+rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
+				 uint8_t port_id,
+				 struct rte_event ev[],
+				 uint16_t nb_events,
+				 const uint8_t flags)
+{
+	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	if (dev_id >= RTE_EVENT_MAX_DEVS ||
+		!rte_eventdevs[dev_id].attached) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+
+	if (port_id >= dev->data->nb_ports) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+#endif
+	rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
+		nb_events, flags);
+	if (flags)
+		return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
+						  ev, nb_events);
+	else
+		return dev->txa_enqueue(dev->data->ports[port_id], ev,
+					nb_events);
+}
+
 int
 rte_event_eth_tx_adapter_stats_get(uint8_t id,
 				struct rte_event_eth_tx_adapter_stats *stats)
diff --git a/lib/eventdev/rte_event_eth_tx_adapter.h b/lib/eventdev/rte_event_eth_tx_adapter.h
index 8c59547165..3cd65e8a09 100644
--- a/lib/eventdev/rte_event_eth_tx_adapter.h
+++ b/lib/eventdev/rte_event_eth_tx_adapter.h
@@ -79,6 +79,7 @@ extern "C" {
 #include <rte_mbuf.h>
 
 #include "rte_eventdev.h"
+#include "eventdev_pmd.h"
 
 /**
  * Adapter configuration structure
@@ -348,36 +349,12 @@ rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
  *              one or more events. This error code is only applicable to
  *              closed systems.
  */
-static inline uint16_t
+uint16_t
 rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
-				uint8_t port_id,
-				struct rte_event ev[],
-				uint16_t nb_events,
-				const uint8_t flags)
-{
-	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
-#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
-	if (dev_id >= RTE_EVENT_MAX_DEVS ||
-		!rte_eventdevs[dev_id].attached) {
-		rte_errno = EINVAL;
-		return 0;
-	}
-
-	if (port_id >= dev->data->nb_ports) {
-		rte_errno = EINVAL;
-		return 0;
-	}
-#endif
-	rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
-		nb_events, flags);
-	if (flags)
-		return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
-						  ev, nb_events);
-	else
-		return dev->txa_enqueue(dev->data->ports[port_id], ev,
-					nb_events);
-}
+				 uint8_t port_id,
+				 struct rte_event ev[],
+				 uint16_t nb_events,
+				 const uint8_t flags);
 
 /**
  * Retrieve statistics for an adapter
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 594dd5e759..e2dad8a838 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -1119,6 +1119,65 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 	return count;
 }
 
+static __rte_always_inline uint16_t
+__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
+			const struct rte_event ev[], uint16_t nb_events,
+			const event_enqueue_burst_t fn)
+{
+	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+
+	if (port_id >= dev->data->nb_ports) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+#endif
+	rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
+	/*
+	 * Allow zero cost non burst mode routine invocation if application
+	 * requests nb_events as const one
+	 */
+	if (nb_events == 1)
+		return (*dev->enqueue)(dev->data->ports[port_id], ev);
+	else
+		return fn(dev->data->ports[port_id], ev, nb_events);
+}
+
+uint16_t
+rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
+			const struct rte_event ev[], uint16_t nb_events)
+{
+	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+	return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
+					 dev->enqueue_burst);
+}
+
+uint16_t
+rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
+			    const struct rte_event ev[], uint16_t nb_events)
+{
+	const struct rte_eventdev *dev = &rte_event_devices[dev_id];
+
+	return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
+					 dev->enqueue_new_burst);
+}
+
+uint16_t
+rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
+				const struct rte_event ev[], uint16_t nb_events)
+{
+	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
+
+	return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
+			dev->enqueue_forward_burst);
+}
+
 int
 rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
 				 uint64_t *timeout_ticks)
@@ -1135,6 +1194,29 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
 	return (*dev->dev_ops->timeout_ticks)(dev, ns, timeout_ticks);
 }
 
+uint16_t
+rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
+			uint16_t nb_events, uint64_t timeout_ticks)
+{
+	struct rte_eventdev *dev = &rte_event_devices[dev_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+
+	if (port_id >= dev->data->nb_ports) {
+		rte_errno = EINVAL;
+		return 0;
+	}
+#endif
+	rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
+
+	return (*dev->dequeue_burst)(dev->data->ports[port_id], ev, nb_events,
+				     timeout_ticks);
+}
+
 int
 rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
 {
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index a9c496fb62..451e9fb0a0 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -1445,38 +1445,6 @@ struct rte_eventdev {
 	void *reserved_ptrs[3];   /**< Reserved for future fields */
 } __rte_cache_aligned;
 
-extern struct rte_eventdev *rte_eventdevs;
-/** @internal The pool of rte_eventdev structures. */
-
-static __rte_always_inline uint16_t
-__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
-			const struct rte_event ev[], uint16_t nb_events,
-			const event_enqueue_burst_t fn)
-{
-	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
-#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
-	if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
-		rte_errno = EINVAL;
-		return 0;
-	}
-
-	if (port_id >= dev->data->nb_ports) {
-		rte_errno = EINVAL;
-		return 0;
-	}
-#endif
-	rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
-	/*
-	 * Allow zero cost non burst mode routine invocation if application
-	 * requests nb_events as const one
-	 */
-	if (nb_events == 1)
-		return (*dev->enqueue)(dev->data->ports[port_id], ev);
-	else
-		return fn(dev->data->ports[port_id], ev, nb_events);
-}
-
 /**
  * Enqueue a burst of events objects or an event object supplied in *rte_event*
  * structure on an  event device designated by its *dev_id* through the event
@@ -1520,15 +1488,9 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
  *              closed systems.
  * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
  */
-static inline uint16_t
+uint16_t
 rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
-			const struct rte_event ev[], uint16_t nb_events)
-{
-	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
-	return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
-			dev->enqueue_burst);
-}
+			const struct rte_event ev[], uint16_t nb_events);
 
 /**
  * Enqueue a burst of events objects of operation type *RTE_EVENT_OP_NEW* on
@@ -1571,15 +1533,9 @@ rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
  * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
  * @see rte_event_enqueue_burst()
  */
-static inline uint16_t
+uint16_t
 rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
-			const struct rte_event ev[], uint16_t nb_events)
-{
-	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
-	return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
-			dev->enqueue_new_burst);
-}
+			    const struct rte_event ev[], uint16_t nb_events);
 
 /**
  * Enqueue a burst of events objects of operation type *RTE_EVENT_OP_FORWARD*
@@ -1622,15 +1578,10 @@ rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
  * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
  * @see rte_event_enqueue_burst()
  */
-static inline uint16_t
+uint16_t
 rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
-			const struct rte_event ev[], uint16_t nb_events)
-{
-	const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
-	return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
-			dev->enqueue_forward_burst);
-}
+				const struct rte_event ev[],
+				uint16_t nb_events);
 
 /**
  * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
@@ -1727,36 +1678,9 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
  *
  * @see rte_event_port_dequeue_depth()
  */
-static inline uint16_t
+uint16_t
 rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
-			uint16_t nb_events, uint64_t timeout_ticks)
-{
-	struct rte_eventdev *dev = &rte_eventdevs[dev_id];
-
-#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
-	if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
-		rte_errno = EINVAL;
-		return 0;
-	}
-
-	if (port_id >= dev->data->nb_ports) {
-		rte_errno = EINVAL;
-		return 0;
-	}
-#endif
-	rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
-	/*
-	 * Allow zero cost non burst mode routine invocation if application
-	 * requests nb_events as const one
-	 */
-	if (nb_events == 1)
-		return (*dev->dequeue)(
-			dev->data->ports[port_id], ev, timeout_ticks);
-	else
-		return (*dev->dequeue_burst)(
-			dev->data->ports[port_id], ev, nb_events,
-				timeout_ticks);
-}
+			uint16_t nb_events, uint64_t timeout_ticks);
 
 /**
  * Link multiple source event queues supplied in *queues* to the destination
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 88625621ec..8da79cbdc0 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -13,7 +13,11 @@ DPDK_22 {
 	rte_event_crypto_adapter_stats_get;
 	rte_event_crypto_adapter_stats_reset;
 	rte_event_crypto_adapter_stop;
+	rte_event_enqueue_burst;
+	rte_event_enqueue_new_burst;
+	rte_event_enqueue_forward_burst;
 	rte_event_dequeue_timeout_ticks;
+	rte_event_dequeue_burst;
 	rte_event_dev_attr_get;
 	rte_event_dev_close;
 	rte_event_dev_configure;
-- 
2.17.1


^ permalink raw reply	[relevance 2%]

* [dpdk-dev] [PATCH v3] ethdev: add namespace
  2021-08-27  1:19  1% ` [dpdk-dev] [PATCH v2] " Ferruh Yigit
@ 2021-08-30 17:19  1%   ` Ferruh Yigit
  0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2021-08-30 17:19 UTC (permalink / raw)
  To: Maryam Tahhan, Reshma Pattan, Jerin Jacob, Wisam Jaddo,
	Cristian Dumitrescu, Xiaoyun Li, Thomas Monjalon,
	Andrew Rybchenko, Jay Jayatheerthan, Chas Williams,
	Min Hu (Connor),
	Pavan Nikhilesh, Shijith Thotton, Ajit Khaparde, Somnath Kotur,
	John Daley, Hyong Youb Kim, Qi Zhang, Xiao Wang, Haiyue Wang,
	Beilei Xing, Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko,
	Keith Wiles, Jiayu Hu, Olivier Matz, Ori Kam, Akhil Goyal,
	Declan Doherty, Ray Kinsella, Radu Nicolau, Hemant Agrawal,
	Sachin Saxena, Nithin Dabilpuram, Kiran Kumar K,
	Sunil Kumar Kori, Satha Rao, John W. Linville, Ciara Loftus,
	Shepard Siegel, Ed Czeck, John Miller, Igor Russkikh,
	Steven Webster, Matt Peters, Somalapuram Amaranath, Rasesh Mody,
	Shahed Shaikh, Bruce Richardson, Konstantin Ananyev,
	Ruifeng Wang, Rahul Lakkireddy, Marcin Wojtas, Michal Krawczyk,
	Shai Brandes, Evgeny Schemeilin, Igor Chauskin, Gagandeep Singh,
	Gaetan Rivet, Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou,
	Yisen Zhuang, Lijun Ou, Jingjing Wu, Qiming Yang, Andrew Boyer,
	Rosen Xu, Srisivasubramanian Srinivasan, Jakub Grajciar,
	Zyta Szpak, Liron Himi, Stephen Hemminger, Long Li,
	Martin Spinler, Heinrich Kuhn, Jiawen Wu, Tetsuya Mukawa,
	Harman Kalra, Anoob Joseph, Nalla Pradeep,
	Radha Mohan Chintakuntla, Veerasenareddy Burru,
	Devendra Singh Rawat, Jasvinder Singh, Maciej Czekaj, Jian Wang,
	Maxime Coquelin, Chenbo Xia, Yong Wang, Nicolas Chautru,
	David Hunt, Harry van Haaren, Bernard Iremonger, Anatoly Burakov,
	John McNamara, Kirill Rybalchenko, Byron Marohn, Yipeng Wang
  Cc: Ferruh Yigit, dev, Tyler Retzlaff, David Marchand

Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.
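
The pattern for applications and drivers is a mechanical rename; one
illustrative pair is shown below (the exact macro is only an example, the full
list is in the diff, and the old name keeps working through a backward
compatible macro until it is removed in the next LTS):

    struct rte_eth_conf conf = { 0 };

    /* before (still accepted via the compatibility macro) */
    conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;

    /* after */
    conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;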

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
Cc: David Marchand <david.marchand@redhat.com>

v2:
* Updated internal components
* Removed deprecation notice

v3:
* Updated missing macros / structs that David highlighted
* Added release notes update
---
 app/proc-info/main.c                          |   8 +-
 app/test-eventdev/test_perf_common.c          |   4 +-
 app/test-eventdev/test_pipeline_common.c      |  12 +-
 app/test-flow-perf/config.h                   |   2 +-
 app/test-pipeline/init.c                      |   8 +-
 app/test-pmd/cmdline.c                        | 298 +++---
 app/test-pmd/config.c                         | 202 ++--
 app/test-pmd/csumonly.c                       |  28 +-
 app/test-pmd/flowgen.c                        |   6 +-
 app/test-pmd/macfwd.c                         |   6 +-
 app/test-pmd/macswap_common.h                 |   6 +-
 app/test-pmd/parameters.c                     |  54 +-
 app/test-pmd/testpmd.c                        |  60 +-
 app/test-pmd/testpmd.h                        |   2 +-
 app/test-pmd/txonly.c                         |   6 +-
 app/test/test_ethdev_link.c                   |  68 +-
 app/test/test_event_eth_rx_adapter.c          |   4 +-
 app/test/test_kni.c                           |   2 +-
 app/test/test_link_bonding.c                  |   4 +-
 app/test/test_link_bonding_mode4.c            |   4 +-
 app/test/test_link_bonding_rssconf.c          |  28 +-
 app/test/test_pmd_perf.c                      |  12 +-
 app/test/virtual_pmd.c                        |  10 +-
 doc/guides/eventdevs/cnxk.rst                 |   2 +-
 doc/guides/eventdevs/octeontx2.rst            |   2 +-
 doc/guides/howto/debug_troubleshoot.rst       |   2 +-
 doc/guides/nics/bnxt.rst                      |  26 +-
 doc/guides/nics/enic.rst                      |   2 +-
 doc/guides/nics/features.rst                  | 116 +-
 doc/guides/nics/fm10k.rst                     |   6 +-
 doc/guides/nics/intel_vf.rst                  |  10 +-
 doc/guides/nics/ixgbe.rst                     |  12 +-
 doc/guides/nics/mlx5.rst                      |   4 +-
 doc/guides/nics/tap.rst                       |   2 +-
 .../generic_segmentation_offload_lib.rst      |   8 +-
 doc/guides/prog_guide/mbuf_lib.rst            |  18 +-
 doc/guides/prog_guide/poll_mode_drv.rst       |   8 +-
 doc/guides/prog_guide/rte_flow.rst            |  34 +-
 doc/guides/prog_guide/rte_security.rst        |   2 +-
 doc/guides/rel_notes/deprecation.rst          |  12 +-
 doc/guides/rel_notes/release_21_11.rst        |   3 +
 doc/guides/sample_app_ug/ipsec_secgw.rst      |   4 +-
 doc/guides/testpmd_app_ug/run_app.rst         |   2 +-
 drivers/bus/dpaa/include/process.h            |  16 +-
 drivers/common/cnxk/roc_npc.h                 |   2 +-
 drivers/net/af_packet/rte_eth_af_packet.c     |  16 +-
 drivers/net/af_xdp/rte_eth_af_xdp.c           |  12 +-
 drivers/net/ark/ark_ethdev.c                  |  16 +-
 drivers/net/atlantic/atl_ethdev.c             |  90 +-
 drivers/net/atlantic/atl_ethdev.h             |  18 +-
 drivers/net/atlantic/atl_rxtx.c               |   6 +-
 drivers/net/avp/avp_ethdev.c                  |  26 +-
 drivers/net/axgbe/axgbe_dev.c                 |   6 +-
 drivers/net/axgbe/axgbe_ethdev.c              | 110 +-
 drivers/net/axgbe/axgbe_ethdev.h              |  12 +-
 drivers/net/axgbe/axgbe_mdio.c                |   2 +-
 drivers/net/axgbe/axgbe_rxtx.c                |   6 +-
 drivers/net/bnx2x/bnx2x_ethdev.c              |  16 +-
 drivers/net/bnxt/bnxt.h                       |  68 +-
 drivers/net/bnxt/bnxt_ethdev.c                | 178 ++--
 drivers/net/bnxt/bnxt_flow.c                  |   4 +-
 drivers/net/bnxt/bnxt_hwrm.c                  | 112 +-
 drivers/net/bnxt/bnxt_reps.c                  |   2 +-
 drivers/net/bnxt/bnxt_ring.c                  |   4 +-
 drivers/net/bnxt/bnxt_rxq.c                   |  28 +-
 drivers/net/bnxt/bnxt_rxr.c                   |   4 +-
 drivers/net/bnxt/bnxt_rxtx_vec_avx2.c         |   2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_common.h       |   2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_neon.c         |   2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c          |   2 +-
 drivers/net/bnxt/bnxt_txr.c                   |   4 +-
 drivers/net/bnxt/bnxt_vnic.c                  |  30 +-
 drivers/net/bnxt/rte_pmd_bnxt.c               |   8 +-
 drivers/net/bonding/eth_bond_private.h        |   4 +-
 drivers/net/bonding/rte_eth_bond_8023ad.c     |  16 +-
 drivers/net/bonding/rte_eth_bond_api.c        |   6 +-
 drivers/net/bonding/rte_eth_bond_pmd.c        |  56 +-
 drivers/net/cnxk/cn10k_ethdev.c               |  38 +-
 drivers/net/cnxk/cn10k_rx.c                   |   4 +-
 drivers/net/cnxk/cn10k_tx.c                   |   4 +-
 drivers/net/cnxk/cn9k_ethdev.c                |  56 +-
 drivers/net/cnxk/cn9k_rx.c                    |   4 +-
 drivers/net/cnxk/cn9k_tx.c                    |   4 +-
 drivers/net/cnxk/cnxk_ethdev.c                |  84 +-
 drivers/net/cnxk/cnxk_ethdev.h                |  49 +-
 drivers/net/cnxk/cnxk_ethdev_devargs.c        |   6 +-
 drivers/net/cnxk/cnxk_ethdev_ops.c            | 112 +-
 drivers/net/cnxk/cnxk_link.c                  |  14 +-
 drivers/net/cnxk/cnxk_ptp.c                   |   4 +-
 drivers/net/cnxk/cnxk_rte_flow.c              |   2 +-
 drivers/net/cxgbe/cxgbe.h                     |  48 +-
 drivers/net/cxgbe/cxgbe_ethdev.c              |  50 +-
 drivers/net/cxgbe/cxgbe_main.c                |  12 +-
 drivers/net/cxgbe/sge.c                       |   2 +-
 drivers/net/dpaa/dpaa_ethdev.c                | 190 ++--
 drivers/net/dpaa/dpaa_ethdev.h                |  10 +-
 drivers/net/dpaa/dpaa_flow.c                  |  32 +-
 drivers/net/dpaa2/base/dpaa2_hw_dpni.c        |  34 +-
 drivers/net/dpaa2/dpaa2_ethdev.c              | 148 +--
 drivers/net/dpaa2/dpaa2_ethdev.h              |  12 +-
 drivers/net/dpaa2/dpaa2_rxtx.c                |   8 +-
 drivers/net/e1000/e1000_ethdev.h              |  18 +-
 drivers/net/e1000/em_ethdev.c                 |  68 +-
 drivers/net/e1000/em_rxtx.c                   |  48 +-
 drivers/net/e1000/igb_ethdev.c                | 166 +--
 drivers/net/e1000/igb_pf.c                    |   2 +-
 drivers/net/e1000/igb_rxtx.c                  | 120 +--
 drivers/net/ena/ena_ethdev.c                  |  70 +-
 drivers/net/ena/ena_ethdev.h                  |   4 +-
 drivers/net/ena/ena_rss.c                     |  76 +-
 drivers/net/enetc/enetc_ethdev.c              |  38 +-
 drivers/net/enic/enic.h                       |   2 +-
 drivers/net/enic/enic_ethdev.c                |  88 +-
 drivers/net/enic/enic_main.c                  |  40 +-
 drivers/net/enic/enic_res.c                   |  52 +-
 drivers/net/failsafe/failsafe.c               |   8 +-
 drivers/net/failsafe/failsafe_intr.c          |   4 +-
 drivers/net/failsafe/failsafe_ops.c           |  82 +-
 drivers/net/fm10k/fm10k.h                     |   4 +-
 drivers/net/fm10k/fm10k_ethdev.c              | 148 +--
 drivers/net/fm10k/fm10k_rxtx_vec.c            |   6 +-
 drivers/net/hinic/base/hinic_pmd_hwdev.c      |  22 +-
 drivers/net/hinic/hinic_pmd_ethdev.c          | 142 +--
 drivers/net/hinic/hinic_pmd_rx.c              |  36 +-
 drivers/net/hinic/hinic_pmd_rx.h              |  22 +-
 drivers/net/hns3/hns3_dcb.c                   |  14 +-
 drivers/net/hns3/hns3_ethdev.c                | 360 +++----
 drivers/net/hns3/hns3_ethdev.h                |  12 +-
 drivers/net/hns3/hns3_ethdev_vf.c             | 108 +-
 drivers/net/hns3/hns3_flow.c                  |   6 +-
 drivers/net/hns3/hns3_ptp.c                   |   2 +-
 drivers/net/hns3/hns3_rss.c                   | 108 +-
 drivers/net/hns3/hns3_rss.h                   |  28 +-
 drivers/net/hns3/hns3_rxtx.c                  |  30 +-
 drivers/net/hns3/hns3_rxtx.h                  |   2 +-
 drivers/net/hns3/hns3_rxtx_vec.c              |  10 +-
 drivers/net/i40e/i40e_ethdev.c                | 278 ++---
 drivers/net/i40e/i40e_ethdev.h                |  24 +-
 drivers/net/i40e/i40e_ethdev_vf.c             | 118 +--
 drivers/net/i40e/i40e_flow.c                  |   2 +-
 drivers/net/i40e/i40e_hash.c                  | 156 +--
 drivers/net/i40e/i40e_pf.c                    |  14 +-
 drivers/net/i40e/i40e_rxtx.c                  |  10 +-
 drivers/net/i40e/i40e_rxtx.h                  |   4 +-
 drivers/net/i40e/i40e_rxtx_vec_avx512.c       |   2 +-
 drivers/net/i40e/i40e_rxtx_vec_common.h       |   8 +-
 drivers/net/i40e/i40e_vf_representor.c        |  48 +-
 drivers/net/iavf/iavf.h                       |  24 +-
 drivers/net/iavf/iavf_ethdev.c                | 186 ++--
 drivers/net/iavf/iavf_hash.c                  | 300 +++---
 drivers/net/iavf/iavf_rxtx.c                  |   2 +-
 drivers/net/iavf/iavf_rxtx.h                  |  24 +-
 drivers/net/iavf/iavf_rxtx_vec_avx2.c         |   4 +-
 drivers/net/iavf/iavf_rxtx_vec_avx512.c       |   6 +-
 drivers/net/iavf/iavf_rxtx_vec_sse.c          |   2 +-
 drivers/net/ice/ice_dcf.c                     |   2 +-
 drivers/net/ice/ice_dcf_ethdev.c              |  90 +-
 drivers/net/ice/ice_dcf_vf_representor.c      |  58 +-
 drivers/net/ice/ice_ethdev.c                  | 190 ++--
 drivers/net/ice/ice_ethdev.h                  |  26 +-
 drivers/net/ice/ice_hash.c                    | 268 ++---
 drivers/net/ice/ice_rxtx.c                    |   8 +-
 drivers/net/ice/ice_rxtx_vec_avx2.c           |   2 +-
 drivers/net/ice/ice_rxtx_vec_avx512.c         |   4 +-
 drivers/net/ice/ice_rxtx_vec_common.h         |  26 +-
 drivers/net/ice/ice_rxtx_vec_sse.c            |   2 +-
 drivers/net/igc/igc_ethdev.c                  | 146 +--
 drivers/net/igc/igc_ethdev.h                  |  56 +-
 drivers/net/igc/igc_txrx.c                    |  50 +-
 drivers/net/ionic/ionic_ethdev.c              | 140 +--
 drivers/net/ionic/ionic_ethdev.h              |  12 +-
 drivers/net/ionic/ionic_lif.c                 |  36 +-
 drivers/net/ionic/ionic_rxtx.c                |  10 +-
 drivers/net/ipn3ke/ipn3ke_representor.c       |  70 +-
 drivers/net/ixgbe/ixgbe_ethdev.c              | 313 +++---
 drivers/net/ixgbe/ixgbe_ethdev.h              |  18 +-
 drivers/net/ixgbe/ixgbe_fdir.c                |  24 +-
 drivers/net/ixgbe/ixgbe_flow.c                |   2 +-
 drivers/net/ixgbe/ixgbe_ipsec.c               |  12 +-
 drivers/net/ixgbe/ixgbe_pf.c                  |  38 +-
 drivers/net/ixgbe/ixgbe_rxtx.c                | 253 +++--
 drivers/net/ixgbe/ixgbe_rxtx.h                |   4 +-
 drivers/net/ixgbe/ixgbe_rxtx_vec_common.h     |   2 +-
 drivers/net/ixgbe/ixgbe_tm.c                  |  16 +-
 drivers/net/ixgbe/ixgbe_vf_representor.c      |  16 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.c             |  14 +-
 drivers/net/ixgbe/rte_pmd_ixgbe.h             |   4 +-
 drivers/net/kni/rte_eth_kni.c                 |   8 +-
 drivers/net/liquidio/lio_ethdev.c             | 118 +--
 drivers/net/memif/memif_socket.c              |   2 +-
 drivers/net/memif/rte_eth_memif.c             |  14 +-
 drivers/net/mlx4/mlx4_ethdev.c                |  32 +-
 drivers/net/mlx4/mlx4_flow.c                  |  30 +-
 drivers/net/mlx4/mlx4_intr.c                  |   8 +-
 drivers/net/mlx4/mlx4_rxq.c                   |  20 +-
 drivers/net/mlx4/mlx4_txq.c                   |  24 +-
 drivers/net/mlx5/linux/mlx5_ethdev_os.c       |  54 +-
 drivers/net/mlx5/linux/mlx5_os.c              |   6 +-
 drivers/net/mlx5/mlx5.c                       |   4 +-
 drivers/net/mlx5/mlx5.h                       |   2 +-
 drivers/net/mlx5/mlx5_defs.h                  |   6 +-
 drivers/net/mlx5/mlx5_ethdev.c                |   6 +-
 drivers/net/mlx5/mlx5_flow.c                  |  54 +-
 drivers/net/mlx5/mlx5_flow.h                  |  12 +-
 drivers/net/mlx5/mlx5_flow_dv.c               |  44 +-
 drivers/net/mlx5/mlx5_flow_verbs.c            |   4 +-
 drivers/net/mlx5/mlx5_rss.c                   |  10 +-
 drivers/net/mlx5/mlx5_rxq.c                   |  42 +-
 drivers/net/mlx5/mlx5_rxtx_vec.h              |   8 +-
 drivers/net/mlx5/mlx5_tx.c                    |  30 +-
 drivers/net/mlx5/mlx5_txq.c                   |  52 +-
 drivers/net/mlx5/mlx5_vlan.c                  |   4 +-
 drivers/net/mlx5/windows/mlx5_os.c            |   4 +-
 drivers/net/mvneta/mvneta_ethdev.c            |  34 +-
 drivers/net/mvneta/mvneta_ethdev.h            |  12 +-
 drivers/net/mvneta/mvneta_rxtx.c              |   2 +-
 drivers/net/mvpp2/mrvl_ethdev.c               | 116 +-
 drivers/net/netvsc/hn_ethdev.c                |  70 +-
 drivers/net/netvsc/hn_rndis.c                 |  50 +-
 drivers/net/nfb/nfb_ethdev.c                  |  20 +-
 drivers/net/nfb/nfb_rx.c                      |   2 +-
 drivers/net/nfp/nfp_common.c                  | 130 +--
 drivers/net/nfp/nfp_ethdev.c                  |   2 +-
 drivers/net/nfp/nfp_ethdev_vf.c               |   2 +-
 drivers/net/ngbe/ngbe_ethdev.c                |  50 +-
 drivers/net/null/rte_eth_null.c               |  28 +-
 drivers/net/octeontx/octeontx_ethdev.c        |  78 +-
 drivers/net/octeontx/octeontx_ethdev.h        |  32 +-
 drivers/net/octeontx/octeontx_ethdev_ops.c    |  26 +-
 drivers/net/octeontx2/otx2_ethdev.c           |  96 +-
 drivers/net/octeontx2/otx2_ethdev.h           |  66 +-
 drivers/net/octeontx2/otx2_ethdev_devargs.c   |  12 +-
 drivers/net/octeontx2/otx2_ethdev_ops.c       |  18 +-
 drivers/net/octeontx2/otx2_ethdev_sec.c       |   8 +-
 drivers/net/octeontx2/otx2_flow.c             |   2 +-
 drivers/net/octeontx2/otx2_flow_ctrl.c        |  36 +-
 drivers/net/octeontx2/otx2_flow_parse.c       |   4 +-
 drivers/net/octeontx2/otx2_link.c             |  40 +-
 drivers/net/octeontx2/otx2_mcast.c            |   2 +-
 drivers/net/octeontx2/otx2_ptp.c              |   4 +-
 drivers/net/octeontx2/otx2_rss.c              |  70 +-
 drivers/net/octeontx2/otx2_rx.c               |   4 +-
 drivers/net/octeontx2/otx2_tx.c               |   2 +-
 drivers/net/octeontx2/otx2_vlan.c             |  42 +-
 drivers/net/octeontx_ep/otx_ep_ethdev.c       |   8 +-
 drivers/net/octeontx_ep/otx_ep_rxtx.c         |   8 +-
 drivers/net/pcap/pcap_ethdev.c                |  12 +-
 drivers/net/pfe/pfe_ethdev.c                  |  18 +-
 drivers/net/qede/base/mcp_public.h            |   4 +-
 drivers/net/qede/qede_ethdev.c                | 152 +--
 drivers/net/qede/qede_filter.c                |  10 +-
 drivers/net/qede/qede_rxtx.c                  |   2 +-
 drivers/net/qede/qede_rxtx.h                  |  16 +-
 drivers/net/ring/rte_eth_ring.c               |  20 +-
 drivers/net/sfc/sfc.c                         |  30 +-
 drivers/net/sfc/sfc_ef100_rx.c                |  10 +-
 drivers/net/sfc/sfc_ef100_tx.c                |  20 +-
 drivers/net/sfc/sfc_ef10_essb_rx.c            |   4 +-
 drivers/net/sfc/sfc_ef10_rx.c                 |   8 +-
 drivers/net/sfc/sfc_ef10_tx.c                 |  32 +-
 drivers/net/sfc/sfc_ethdev.c                  |  52 +-
 drivers/net/sfc/sfc_flow.c                    |   2 +-
 drivers/net/sfc/sfc_port.c                    |  54 +-
 drivers/net/sfc/sfc_rx.c                      |  52 +-
 drivers/net/sfc/sfc_tx.c                      |  50 +-
 drivers/net/softnic/rte_eth_softnic.c         |  12 +-
 drivers/net/szedata2/rte_eth_szedata2.c       |  14 +-
 drivers/net/tap/rte_eth_tap.c                 | 104 +-
 drivers/net/tap/tap_rss.h                     |   2 +-
 drivers/net/thunderx/nicvf_ethdev.c           | 108 +-
 drivers/net/thunderx/nicvf_ethdev.h           |  42 +-
 drivers/net/txgbe/txgbe_ethdev.c              | 244 ++---
 drivers/net/txgbe/txgbe_ethdev.h              |  18 +-
 drivers/net/txgbe/txgbe_ethdev_vf.c           |  24 +-
 drivers/net/txgbe/txgbe_fdir.c                |  20 +-
 drivers/net/txgbe/txgbe_flow.c                |   2 +-
 drivers/net/txgbe/txgbe_ipsec.c               |  12 +-
 drivers/net/txgbe/txgbe_pf.c                  |  34 +-
 drivers/net/txgbe/txgbe_rxtx.c                | 312 +++---
 drivers/net/txgbe/txgbe_rxtx.h                |   4 +-
 drivers/net/txgbe/txgbe_tm.c                  |  16 +-
 drivers/net/vhost/rte_eth_vhost.c             |  16 +-
 drivers/net/virtio/virtio_ethdev.c            | 126 +--
 drivers/net/vmxnet3/vmxnet3_ethdev.c          |  74 +-
 drivers/net/vmxnet3/vmxnet3_ethdev.h          |  16 +-
 drivers/net/vmxnet3/vmxnet3_rxtx.c            |  16 +-
 examples/bbdev_app/main.c                     |   6 +-
 examples/bond/main.c                          |  14 +-
 examples/distributor/main.c                   |  12 +-
 examples/ethtool/ethtool-app/main.c           |   2 +-
 examples/ethtool/lib/rte_ethtool.c            |  18 +-
 .../pipeline_worker_generic.c                 |  16 +-
 .../eventdev_pipeline/pipeline_worker_tx.c    |  12 +-
 examples/flow_classify/flow_classify.c        |   4 +-
 examples/flow_filtering/main.c                |  16 +-
 examples/ioat/ioatfwd.c                       |   8 +-
 examples/ip_fragmentation/main.c              |  14 +-
 examples/ip_pipeline/link.c                   |  20 +-
 examples/ip_reassembly/main.c                 |  20 +-
 examples/ipsec-secgw/ipsec-secgw.c            |  34 +-
 examples/ipsec-secgw/sa.c                     |   8 +-
 examples/ipv4_multicast/main.c                |   8 +-
 examples/kni/main.c                           |  12 +-
 examples/l2fwd-crypto/main.c                  |  10 +-
 examples/l2fwd-event/l2fwd_common.c           |  10 +-
 examples/l2fwd-event/main.c                   |   2 +-
 examples/l2fwd-jobstats/main.c                |   8 +-
 examples/l2fwd-keepalive/main.c               |   8 +-
 examples/l2fwd/main.c                         |   8 +-
 examples/l3fwd-acl/main.c                     |  20 +-
 examples/l3fwd-graph/main.c                   |  16 +-
 examples/l3fwd-power/main.c                   |  18 +-
 examples/l3fwd/l3fwd_event.c                  |   4 +-
 examples/l3fwd/main.c                         |  20 +-
 examples/link_status_interrupt/main.c         |  10 +-
 .../client_server_mp/mp_server/init.c         |   4 +-
 examples/multi_process/symmetric_mp/main.c    |  14 +-
 examples/ntb/ntb_fwd.c                        |   6 +-
 examples/packet_ordering/main.c               |   4 +-
 .../performance-thread/l3fwd-thread/main.c    |  18 +-
 examples/pipeline/obj.c                       |  20 +-
 examples/ptpclient/ptpclient.c                |  10 +-
 examples/qos_meter/main.c                     |  16 +-
 examples/qos_sched/init.c                     |   6 +-
 examples/rxtx_callbacks/main.c                |   8 +-
 examples/server_node_efd/server/init.c        |   8 +-
 examples/skeleton/basicfwd.c                  |   4 +-
 examples/vhost/main.c                         |  28 +-
 examples/vm_power_manager/main.c              |   6 +-
 examples/vmdq/main.c                          |  20 +-
 examples/vmdq_dcb/main.c                      |  40 +-
 lib/ethdev/rte_ethdev.c                       | 193 ++--
 lib/ethdev/rte_ethdev.h                       | 997 +++++++++++-------
 lib/ethdev/rte_ethdev_core.h                  |   2 +-
 lib/ethdev/rte_flow.h                         |   2 +-
 lib/gso/rte_gso.c                             |  20 +-
 lib/gso/rte_gso.h                             |   4 +-
 lib/mbuf/rte_mbuf_core.h                      |   8 +-
 lib/mbuf/rte_mbuf_dyn.h                       |   2 +-
 339 files changed, 6728 insertions(+), 6500 deletions(-)
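
For reference, the change is a mechanical rename of the public ethdev
namespace (ETH_*, DEV_RX_OFFLOAD_*/DEV_TX_OFFLOAD_*, RTE_FC_*, etc. gain an
RTE_ETH_ prefix). A typical application-level update looks like the
following illustrative C snippet; it is not part of the generated diff,
only a sketch of the pattern applied throughout:

    /* before */
    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
        .rx_adv_conf = { .rss_conf = { .rss_hf = ETH_RSS_IP } },
        .txmode = { .mq_mode = ETH_MQ_TX_NONE },
    };
    conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;

    /* after (new RTE_ETH_ prefixed names) */
    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
        .rx_adv_conf = { .rss_conf = { .rss_hf = RTE_ETH_RSS_IP } },
        .txmode = { .mq_mode = RTE_ETH_MQ_TX_NONE },
    };
    conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;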

diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index a8e928fa9ff3..963b6aa5c589 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -757,11 +757,11 @@ show_port(void)
 		}
 
 		ret = rte_eth_dev_flow_ctrl_get(i, &fc_conf);
-		if (ret == 0 && fc_conf.mode != RTE_FC_NONE)  {
+		if (ret == 0 && fc_conf.mode != RTE_ETH_FC_NONE)  {
 			printf("\t  -- flow control mode %s%s high %u low %u pause %u%s%s\n",
-			       fc_conf.mode == RTE_FC_RX_PAUSE ? "rx " :
-			       fc_conf.mode == RTE_FC_TX_PAUSE ? "tx " :
-			       fc_conf.mode == RTE_FC_FULL ? "full" : "???",
+			       fc_conf.mode == RTE_ETH_FC_RX_PAUSE ? "rx " :
+			       fc_conf.mode == RTE_ETH_FC_TX_PAUSE ? "tx " :
+			       fc_conf.mode == RTE_ETH_FC_FULL ? "full" : "???",
 			       fc_conf.autoneg ? " auto" : "",
 			       fc_conf.high_water,
 			       fc_conf.low_water,
diff --git a/app/test-eventdev/test_perf_common.c b/app/test-eventdev/test_perf_common.c
index cc100650c21e..41e92143121b 100644
--- a/app/test-eventdev/test_perf_common.c
+++ b/app/test-eventdev/test_perf_common.c
@@ -668,14 +668,14 @@ perf_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct test_perf *t = evt_test_priv(test);
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 			.split_hdr_size = 0,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
diff --git a/app/test-eventdev/test_pipeline_common.c b/app/test-eventdev/test_pipeline_common.c
index 6ee530d4cdc9..96c8a5828364 100644
--- a/app/test-eventdev/test_pipeline_common.c
+++ b/app/test-eventdev/test_pipeline_common.c
@@ -176,12 +176,12 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 	struct rte_eth_rxconf rx_conf;
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 			},
 		},
 	};
@@ -199,7 +199,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 
 	port_conf.rxmode.max_rx_pkt_len = opt->max_pkt_sz;
 	if (opt->max_pkt_sz > RTE_ETHER_MAX_LEN)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	t->internal_port = 1;
 	RTE_ETH_FOREACH_DEV(i) {
@@ -224,7 +224,7 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 
 		if (!(caps & RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT))
 			local_port_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_RSS_HASH;
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		ret = rte_eth_dev_info_get(i, &dev_info);
 		if (ret != 0) {
@@ -234,9 +234,9 @@ pipeline_ethdev_setup(struct evt_test *test, struct evt_options *opt)
 		}
 
 		/* Enable mbuf fast free if PMD has the capability. */
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		rx_conf = dev_info.default_rxconf;
 		rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/app/test-flow-perf/config.h b/app/test-flow-perf/config.h
index a14d4e05e185..4249b6175b82 100644
--- a/app/test-flow-perf/config.h
+++ b/app/test-flow-perf/config.h
@@ -5,7 +5,7 @@
 #define FLOW_ITEM_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ACTION_MASK(_x) (UINT64_C(1) << _x)
 #define FLOW_ATTR_MASK(_x) (UINT64_C(1) << _x)
-#define GET_RSS_HF() (ETH_RSS_IP)
+#define GET_RSS_HF() (RTE_ETH_RSS_IP)
 
 /* Configuration */
 #define RXQ_NUM 4
diff --git a/app/test-pipeline/init.c b/app/test-pipeline/init.c
index fe37d63730c6..c73801904103 100644
--- a/app/test-pipeline/init.c
+++ b/app/test-pipeline/init.c
@@ -70,16 +70,16 @@ struct app_params app = {
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -178,7 +178,7 @@ app_ports_check_link(void)
 		RTE_LOG(INFO, USER1, "Port %u %s\n",
 			port,
 			link_status_text);
-		if (link.link_status == ETH_LINK_DOWN)
+		if (link.link_status == RTE_ETH_LINK_DOWN)
 			all_ports_up = 0;
 	}
 
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 82253bc75110..9ff9847dffa0 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1490,51 +1490,51 @@ parse_and_check_speed_duplex(char *speedstr, char *duplexstr, uint32_t *speed)
 	int duplex;
 
 	if (!strcmp(duplexstr, "half")) {
-		duplex = ETH_LINK_HALF_DUPLEX;
+		duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	} else if (!strcmp(duplexstr, "full")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else if (!strcmp(duplexstr, "auto")) {
-		duplex = ETH_LINK_FULL_DUPLEX;
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		fprintf(stderr, "Unknown duplex parameter\n");
 		return -1;
 	}
 
 	if (!strcmp(speedstr, "10")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_10M_HD : ETH_LINK_SPEED_10M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_10M_HD : RTE_ETH_LINK_SPEED_10M;
 	} else if (!strcmp(speedstr, "100")) {
-		*speed = (duplex == ETH_LINK_HALF_DUPLEX) ?
-				ETH_LINK_SPEED_100M_HD : ETH_LINK_SPEED_100M;
+		*speed = (duplex == RTE_ETH_LINK_HALF_DUPLEX) ?
+				RTE_ETH_LINK_SPEED_100M_HD : RTE_ETH_LINK_SPEED_100M;
 	} else {
-		if (duplex != ETH_LINK_FULL_DUPLEX) {
+		if (duplex != RTE_ETH_LINK_FULL_DUPLEX) {
 			fprintf(stderr, "Invalid speed/duplex parameters\n");
 			return -1;
 		}
 		if (!strcmp(speedstr, "1000")) {
-			*speed = ETH_LINK_SPEED_1G;
+			*speed = RTE_ETH_LINK_SPEED_1G;
 		} else if (!strcmp(speedstr, "10000")) {
-			*speed = ETH_LINK_SPEED_10G;
+			*speed = RTE_ETH_LINK_SPEED_10G;
 		} else if (!strcmp(speedstr, "25000")) {
-			*speed = ETH_LINK_SPEED_25G;
+			*speed = RTE_ETH_LINK_SPEED_25G;
 		} else if (!strcmp(speedstr, "40000")) {
-			*speed = ETH_LINK_SPEED_40G;
+			*speed = RTE_ETH_LINK_SPEED_40G;
 		} else if (!strcmp(speedstr, "50000")) {
-			*speed = ETH_LINK_SPEED_50G;
+			*speed = RTE_ETH_LINK_SPEED_50G;
 		} else if (!strcmp(speedstr, "100000")) {
-			*speed = ETH_LINK_SPEED_100G;
+			*speed = RTE_ETH_LINK_SPEED_100G;
 		} else if (!strcmp(speedstr, "200000")) {
-			*speed = ETH_LINK_SPEED_200G;
+			*speed = RTE_ETH_LINK_SPEED_200G;
 		} else if (!strcmp(speedstr, "auto")) {
-			*speed = ETH_LINK_SPEED_AUTONEG;
+			*speed = RTE_ETH_LINK_SPEED_AUTONEG;
 		} else {
 			fprintf(stderr, "Unknown speed parameter\n");
 			return -1;
 		}
 	}
 
-	if (*speed != ETH_LINK_SPEED_AUTONEG)
-		*speed |= ETH_LINK_SPEED_FIXED;
+	if (*speed != RTE_ETH_LINK_SPEED_AUTONEG)
+		*speed |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return 0;
 }
@@ -2185,33 +2185,33 @@ cmd_config_rss_parsed(void *parsed_result,
 	int ret;
 
 	if (!strcmp(res->value, "all"))
-		rss_conf.rss_hf = ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP |
-			ETH_RSS_TCP | ETH_RSS_UDP | ETH_RSS_SCTP |
-			ETH_RSS_L2_PAYLOAD | ETH_RSS_L2TPV3 | ETH_RSS_ESP |
-			ETH_RSS_AH | ETH_RSS_PFCP | ETH_RSS_GTPU |
-			ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP |
+			RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP |
+			RTE_ETH_RSS_L2_PAYLOAD | RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP |
+			RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP | RTE_ETH_RSS_GTPU |
+			RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "eth"))
-		rss_conf.rss_hf = ETH_RSS_ETH;
+		rss_conf.rss_hf = RTE_ETH_RSS_ETH;
 	else if (!strcmp(res->value, "vlan"))
-		rss_conf.rss_hf = ETH_RSS_VLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VLAN;
 	else if (!strcmp(res->value, "ip"))
-		rss_conf.rss_hf = ETH_RSS_IP;
+		rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	else if (!strcmp(res->value, "udp"))
-		rss_conf.rss_hf = ETH_RSS_UDP;
+		rss_conf.rss_hf = RTE_ETH_RSS_UDP;
 	else if (!strcmp(res->value, "tcp"))
-		rss_conf.rss_hf = ETH_RSS_TCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_TCP;
 	else if (!strcmp(res->value, "sctp"))
-		rss_conf.rss_hf = ETH_RSS_SCTP;
+		rss_conf.rss_hf = RTE_ETH_RSS_SCTP;
 	else if (!strcmp(res->value, "ether"))
-		rss_conf.rss_hf = ETH_RSS_L2_PAYLOAD;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_PAYLOAD;
 	else if (!strcmp(res->value, "port"))
-		rss_conf.rss_hf = ETH_RSS_PORT;
+		rss_conf.rss_hf = RTE_ETH_RSS_PORT;
 	else if (!strcmp(res->value, "vxlan"))
-		rss_conf.rss_hf = ETH_RSS_VXLAN;
+		rss_conf.rss_hf = RTE_ETH_RSS_VXLAN;
 	else if (!strcmp(res->value, "geneve"))
-		rss_conf.rss_hf = ETH_RSS_GENEVE;
+		rss_conf.rss_hf = RTE_ETH_RSS_GENEVE;
 	else if (!strcmp(res->value, "nvgre"))
-		rss_conf.rss_hf = ETH_RSS_NVGRE;
+		rss_conf.rss_hf = RTE_ETH_RSS_NVGRE;
 	else if (!strcmp(res->value, "l3-pre32"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE32;
 	else if (!strcmp(res->value, "l3-pre40"))
@@ -2225,44 +2225,44 @@ cmd_config_rss_parsed(void *parsed_result,
 	else if (!strcmp(res->value, "l3-pre96"))
 		rss_conf.rss_hf = RTE_ETH_RSS_L3_PRE96;
 	else if (!strcmp(res->value, "l3-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_SRC_ONLY;
 	else if (!strcmp(res->value, "l3-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L3_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L3_DST_ONLY;
 	else if (!strcmp(res->value, "l4-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_SRC_ONLY;
 	else if (!strcmp(res->value, "l4-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L4_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L4_DST_ONLY;
 	else if (!strcmp(res->value, "l2-src-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_SRC_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_SRC_ONLY;
 	else if (!strcmp(res->value, "l2-dst-only"))
-		rss_conf.rss_hf = ETH_RSS_L2_DST_ONLY;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2_DST_ONLY;
 	else if (!strcmp(res->value, "l2tpv3"))
-		rss_conf.rss_hf = ETH_RSS_L2TPV3;
+		rss_conf.rss_hf = RTE_ETH_RSS_L2TPV3;
 	else if (!strcmp(res->value, "esp"))
-		rss_conf.rss_hf = ETH_RSS_ESP;
+		rss_conf.rss_hf = RTE_ETH_RSS_ESP;
 	else if (!strcmp(res->value, "ah"))
-		rss_conf.rss_hf = ETH_RSS_AH;
+		rss_conf.rss_hf = RTE_ETH_RSS_AH;
 	else if (!strcmp(res->value, "pfcp"))
-		rss_conf.rss_hf = ETH_RSS_PFCP;
+		rss_conf.rss_hf = RTE_ETH_RSS_PFCP;
 	else if (!strcmp(res->value, "pppoe"))
-		rss_conf.rss_hf = ETH_RSS_PPPOE;
+		rss_conf.rss_hf = RTE_ETH_RSS_PPPOE;
 	else if (!strcmp(res->value, "gtpu"))
-		rss_conf.rss_hf = ETH_RSS_GTPU;
+		rss_conf.rss_hf = RTE_ETH_RSS_GTPU;
 	else if (!strcmp(res->value, "ecpri"))
-		rss_conf.rss_hf = ETH_RSS_ECPRI;
+		rss_conf.rss_hf = RTE_ETH_RSS_ECPRI;
 	else if (!strcmp(res->value, "mpls"))
-		rss_conf.rss_hf = ETH_RSS_MPLS;
+		rss_conf.rss_hf = RTE_ETH_RSS_MPLS;
 	else if (!strcmp(res->value, "none"))
 		rss_conf.rss_hf = 0;
 	else if (!strcmp(res->value, "level-default")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_PMD_DEFAULT);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_PMD_DEFAULT);
 	} else if (!strcmp(res->value, "level-outer")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_OUTERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_OUTERMOST);
 	} else if (!strcmp(res->value, "level-inner")) {
-		rss_hf &= (~ETH_RSS_LEVEL_MASK);
-		rss_conf.rss_hf = (rss_hf | ETH_RSS_LEVEL_INNERMOST);
+		rss_hf &= (~RTE_ETH_RSS_LEVEL_MASK);
+		rss_conf.rss_hf = (rss_hf | RTE_ETH_RSS_LEVEL_INNERMOST);
 	} else if (!strcmp(res->value, "default"))
 		use_default = 1;
 	else if (isdigit(res->value[0]) && atoi(res->value) > 0 &&
@@ -2999,8 +2999,8 @@ parse_reta_config(const char *str,
 			return -1;
 		}
 
-		idx = hash_index / RTE_RETA_GROUP_SIZE;
-		shift = hash_index % RTE_RETA_GROUP_SIZE;
+		idx = hash_index / RTE_ETH_RETA_GROUP_SIZE;
+		shift = hash_index % RTE_ETH_RETA_GROUP_SIZE;
 		reta_conf[idx].mask |= (1ULL << shift);
 		reta_conf[idx].reta[shift] = nb_queue;
 	}
@@ -3029,10 +3029,10 @@ cmd_set_rss_reta_parsed(void *parsed_result,
 	} else
 		printf("The reta size of port %d is %u\n",
 			res->port_id, dev_info.reta_size);
-	if (dev_info.reta_size > ETH_RSS_RETA_SIZE_512) {
+	if (dev_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		fprintf(stderr,
 			"Currently do not support more than %u entries of redirection table\n",
-			ETH_RSS_RETA_SIZE_512);
+			RTE_ETH_RSS_RETA_SIZE_512);
 		return;
 	}
 
@@ -3103,8 +3103,8 @@ showport_parse_reta_config(struct rte_eth_rss_reta_entry64 *conf,
 	char *end;
 	char *str_fld[8];
 	uint16_t i;
-	uint16_t num = (nb_entries + RTE_RETA_GROUP_SIZE - 1) /
-			RTE_RETA_GROUP_SIZE;
+	uint16_t num = (nb_entries + RTE_ETH_RETA_GROUP_SIZE - 1) /
+			RTE_ETH_RETA_GROUP_SIZE;
 	int ret;
 
 	p = strchr(p0, '(');
@@ -3149,7 +3149,7 @@ cmd_showport_reta_parsed(void *parsed_result,
 	if (ret != 0)
 		return;
 
-	max_reta_size = RTE_MIN(dev_info.reta_size, ETH_RSS_RETA_SIZE_512);
+	max_reta_size = RTE_MIN(dev_info.reta_size, RTE_ETH_RSS_RETA_SIZE_512);
 	if (res->size == 0 || res->size > max_reta_size) {
 		fprintf(stderr, "Invalid redirection table size: %u (1-%u)\n",
 			res->size, max_reta_size);
@@ -3289,7 +3289,7 @@ cmd_config_dcb_parsed(void *parsed_result,
 		return;
 	}
 
-	if ((res->num_tcs != ETH_4_TCS) && (res->num_tcs != ETH_8_TCS)) {
+	if ((res->num_tcs != RTE_ETH_4_TCS) && (res->num_tcs != RTE_ETH_8_TCS)) {
 		fprintf(stderr,
 			"The invalid number of traffic class, only 4 or 8 allowed.\n");
 		return;
@@ -4293,9 +4293,9 @@ cmd_vlan_tpid_parsed(void *parsed_result,
 	enum rte_vlan_type vlan_type;
 
 	if (!strcmp(res->vlan_type, "inner"))
-		vlan_type = ETH_VLAN_TYPE_INNER;
+		vlan_type = RTE_ETH_VLAN_TYPE_INNER;
 	else if (!strcmp(res->vlan_type, "outer"))
-		vlan_type = ETH_VLAN_TYPE_OUTER;
+		vlan_type = RTE_ETH_VLAN_TYPE_OUTER;
 	else {
 		fprintf(stderr, "Unknown vlan type\n");
 		return;
@@ -4632,55 +4632,55 @@ csum_show(int port_id)
 	printf("Parse tunnel is %s\n",
 		(ports[port_id].parse_tunnel) ? "on" : "off");
 	printf("IP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ? "hw" : "sw");
 	printf("UDP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ? "hw" : "sw");
 	printf("TCP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ? "hw" : "sw");
 	printf("SCTP checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) ? "hw" : "sw");
 	printf("Outer-Ip checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ? "hw" : "sw");
 	printf("Outer-Udp checksum offload is %s\n",
-		(tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ? "hw" : "sw");
 
 	/* display warnings if configuration is not supported by the NIC */
 	ret = eth_dev_info_get_print_err(port_id, &dev_info);
 	if (ret != 0)
 		return;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware UDP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware TCP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_SCTP_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware SCTP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) == 0) {
 		fprintf(stderr,
 			"Warning: hardware outer IP checksum enabled but not supported by port %d\n",
 			port_id);
 	}
-	if ((tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) &&
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 			== 0) {
 		fprintf(stderr,
 			"Warning: hardware outer UDP checksum enabled but not supported by port %d\n",
@@ -4730,8 +4730,8 @@ cmd_csum_parsed(void *parsed_result,
 
 		if (!strcmp(res->proto, "ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_IPV4_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"IP checksum offload is not supported by port %u\n",
@@ -4739,8 +4739,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_UDP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"UDP checksum offload is not supported by port %u\n",
@@ -4748,8 +4748,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "tcp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_TCP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_TCP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"TCP checksum offload is not supported by port %u\n",
@@ -4757,8 +4757,8 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "sctp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-						DEV_TX_OFFLOAD_SCTP_CKSUM)) {
-				csum_offloads |= DEV_TX_OFFLOAD_SCTP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) {
+				csum_offloads |= RTE_ETH_TX_OFFLOAD_SCTP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"SCTP checksum offload is not supported by port %u\n",
@@ -4766,9 +4766,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-ip")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer IP checksum offload is not supported by port %u\n",
@@ -4776,9 +4776,9 @@ cmd_csum_parsed(void *parsed_result,
 			}
 		} else if (!strcmp(res->proto, "outer-udp")) {
 			if (hw == 0 || (dev_info.tx_offload_capa &
-					DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+					RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				csum_offloads |=
-						DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+						RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 			} else {
 				fprintf(stderr,
 					"Outer UDP checksum offload is not supported by port %u\n",
@@ -4933,7 +4933,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr, "Error: TSO is not supported by port %d\n",
 			res->port_id);
 		return;
@@ -4941,11 +4941,11 @@ cmd_tso_set_parsed(void *parsed_result,
 
 	if (ports[res->port_id].tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_TCP_TSO;
+						~RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO for non-tunneled packets is disabled\n");
 	} else {
 		ports[res->port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_TCP_TSO;
+						RTE_ETH_TX_OFFLOAD_TCP_TSO;
 		printf("TSO segment size for non-tunneled packets is %d\n",
 			ports[res->port_id].tso_segsz);
 	}
@@ -4957,7 +4957,7 @@ cmd_tso_set_parsed(void *parsed_result,
 		return;
 
 	if ((ports[res->port_id].tso_segsz != 0) &&
-		(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_TCP_TSO) == 0) {
+		(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_TCP_TSO) == 0) {
 		fprintf(stderr,
 			"Warning: TSO enabled but not supported by port %d\n",
 			res->port_id);
@@ -5028,27 +5028,27 @@ check_tunnel_tso_nic_support(portid_t port_id)
 	if (eth_dev_info_get_print_err(port_id, &dev_info) != 0)
 		return dev_info;
 
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VXLAN_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO))
 		fprintf(stderr,
 			"Warning: VXLAN TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GRE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPIP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IPIP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		fprintf(stderr,
 			"Warning: GENEVE TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: IP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
-	if (!(dev_info.tx_offload_capa & DEV_TX_OFFLOAD_UDP_TNL_TSO))
+	if (!(dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO))
 		fprintf(stderr,
 			"Warning: UDP TUNNEL TSO not supported therefore not enabled for port %d\n",
 			port_id);
@@ -5076,20 +5076,20 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 	dev_info = check_tunnel_tso_nic_support(res->port_id);
 	if (ports[res->port_id].tunnel_tso_segsz == 0) {
 		ports[res->port_id].dev_conf.txmode.offloads &=
-			~(DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_IP_TNL_TSO |
-			  DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			~(RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 		printf("TSO for tunneled packets is disabled\n");
 	} else {
-		uint64_t tso_offloads = (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-					 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					 DEV_TX_OFFLOAD_IP_TNL_TSO |
-					 DEV_TX_OFFLOAD_UDP_TNL_TSO);
+		uint64_t tso_offloads = (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					 RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 
 		ports[res->port_id].dev_conf.txmode.offloads |=
 			(tso_offloads & dev_info.tx_offload_capa);
@@ -5112,7 +5112,7 @@ cmd_tunnel_tso_set_parsed(void *parsed_result,
 			fprintf(stderr,
 				"Warning: csum parse_tunnel must be set so that tunneled packets are recognized\n");
 		if (!(ports[res->port_id].dev_conf.txmode.offloads &
-		      DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+		      RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 			fprintf(stderr,
 				"Warning: csum set outer-ip must be set to hw if outer L3 is IPv4; not necessary for IPv6\n");
 	}
@@ -7058,9 +7058,9 @@ cmd_link_flow_ctrl_show_parsed(void *parsed_result,
 		return;
 	}
 
-	if (fc_conf.mode == RTE_FC_RX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_RX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		rx_fc_en = true;
-	if (fc_conf.mode == RTE_FC_TX_PAUSE || fc_conf.mode == RTE_FC_FULL)
+	if (fc_conf.mode == RTE_ETH_FC_TX_PAUSE || fc_conf.mode == RTE_ETH_FC_FULL)
 		tx_fc_en = true;
 
 	printf("\n%s Flow control infos for port %-2d %s\n",
@@ -7338,12 +7338,12 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_lfc_mode[2][2] = {
-			{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+			{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	/* Partial command line, retrieve current configuration */
@@ -7356,11 +7356,11 @@ cmd_link_flow_ctrl_set_parsed(void *parsed_result,
 			return;
 		}
 
-		if ((fc_conf.mode == RTE_FC_RX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_RX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			rx_fc_en = 1;
-		if ((fc_conf.mode == RTE_FC_TX_PAUSE) ||
-		    (fc_conf.mode == RTE_FC_FULL))
+		if ((fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ||
+		    (fc_conf.mode == RTE_ETH_FC_FULL))
 			tx_fc_en = 1;
 	}
 
@@ -7428,12 +7428,12 @@ cmd_priority_flow_ctrl_set_parsed(void *parsed_result,
 
 	/*
 	 * Rx on/off, flow control is enabled/disabled on RX side. This can indicate
-	 * the RTE_FC_TX_PAUSE, Transmit pause frame at the Rx side.
+	 * the RTE_ETH_FC_TX_PAUSE, Transmit pause frame at the Rx side.
 	 * Tx on/off, flow control is enabled/disabled on TX side. This can indicate
-	 * the RTE_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
+	 * the RTE_ETH_FC_RX_PAUSE, Respond to the pause frame at the Tx side.
 	 */
 	static enum rte_eth_fc_mode rx_tx_onoff_2_pfc_mode[2][2] = {
-		{RTE_FC_NONE, RTE_FC_TX_PAUSE}, {RTE_FC_RX_PAUSE, RTE_FC_FULL}
+		{RTE_ETH_FC_NONE, RTE_ETH_FC_TX_PAUSE}, {RTE_ETH_FC_RX_PAUSE, RTE_ETH_FC_FULL}
 	};
 
 	memset(&pfc_conf, 0, sizeof(struct rte_eth_pfc_conf));
@@ -8950,13 +8950,13 @@ cmd_set_vf_rxmode_parsed(void *parsed_result,
 	int is_on = (strcmp(res->on, "on") == 0) ? 1 : 0;
 	if (!strcmp(res->what,"rxmode")) {
 		if (!strcmp(res->mode, "AUPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_UNTAG;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_UNTAG;
 		else if (!strcmp(res->mode, "ROPE"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_HASH_UC;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_HASH_UC;
 		else if (!strcmp(res->mode, "BAM"))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_BROADCAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_BROADCAST;
 		else if (!strncmp(res->mode, "MPE",3))
-			vf_rxmode |= ETH_VMDQ_ACCEPT_MULTICAST;
+			vf_rxmode |= RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 	}
 
 	RTE_SET_USED(is_on);
@@ -9356,7 +9356,7 @@ cmd_tunnel_udp_config_parsed(void *parsed_result,
 	int ret;
 
 	tunnel_udp.udp_port = res->udp_port;
-	tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+	tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 
 	if (!strcmp(res->what, "add"))
 		ret = rte_eth_dev_udp_tunnel_port_add(res->port_id,
@@ -9422,13 +9422,13 @@ cmd_cfg_tunnel_udp_port_parsed(void *parsed_result,
 	tunnel_udp.udp_port = res->udp_port;
 
 	if (!strcmp(res->tunnel_type, "vxlan")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN;
 	} else if (!strcmp(res->tunnel_type, "geneve")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_GENEVE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_GENEVE;
 	} else if (!strcmp(res->tunnel_type, "vxlan-gpe")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_VXLAN_GPE;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN_GPE;
 	} else if (!strcmp(res->tunnel_type, "ecpri")) {
-		tunnel_udp.prot_type = RTE_TUNNEL_TYPE_ECPRI;
+		tunnel_udp.prot_type = RTE_ETH_TUNNEL_TYPE_ECPRI;
 	} else {
 		fprintf(stderr, "Invalid tunnel type\n");
 		return;
@@ -9543,20 +9543,20 @@ cmd_set_mirror_mask_parsed(void *parsed_result,
 
 	memset(&mr_conf, 0, sizeof(struct rte_eth_mirror_conf));
 
-	unsigned int vlan_list[ETH_MIRROR_MAX_VLANS];
+	unsigned int vlan_list[RTE_ETH_MIRROR_MAX_VLANS];
 
 	mr_conf.dst_pool = res->dstpool_id;
 
 	if (!strcmp(res->what, "pool-mirror-up")) {
 		mr_conf.pool_mask = strtoull(res->value, NULL, 16);
-		mr_conf.rule_type = ETH_MIRROR_VIRTUAL_POOL_UP;
+		mr_conf.rule_type = RTE_ETH_MIRROR_VIRTUAL_POOL_UP;
 	} else if (!strcmp(res->what, "pool-mirror-down")) {
 		mr_conf.pool_mask = strtoull(res->value, NULL, 16);
-		mr_conf.rule_type = ETH_MIRROR_VIRTUAL_POOL_DOWN;
+		mr_conf.rule_type = RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN;
 	} else if (!strcmp(res->what, "vlan-mirror")) {
-		mr_conf.rule_type = ETH_MIRROR_VLAN;
+		mr_conf.rule_type = RTE_ETH_MIRROR_VLAN;
 		nb_item = parse_item_list(res->value, "vlan",
-				ETH_MIRROR_MAX_VLANS, vlan_list, 1);
+				RTE_ETH_MIRROR_MAX_VLANS, vlan_list, 1);
 		if (nb_item <= 0)
 			return;
 
@@ -9656,9 +9656,9 @@ cmd_set_mirror_link_parsed(void *parsed_result,
 
 	memset(&mr_conf, 0, sizeof(struct rte_eth_mirror_conf));
 	if (!strcmp(res->what, "uplink-mirror"))
-		mr_conf.rule_type = ETH_MIRROR_UPLINK_PORT;
+		mr_conf.rule_type = RTE_ETH_MIRROR_UPLINK_PORT;
 	else
-		mr_conf.rule_type = ETH_MIRROR_DOWNLINK_PORT;
+		mr_conf.rule_type = RTE_ETH_MIRROR_DOWNLINK_PORT;
 
 	mr_conf.dst_pool = res->dstpool_id;
 
@@ -11823,7 +11823,7 @@ cmd_set_macsec_offload_on_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_enable(port_id, en, rp);
 #endif
@@ -11834,7 +11834,7 @@ cmd_set_macsec_offload_on_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MACSEC_INSERT;
+						RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
@@ -11920,7 +11920,7 @@ cmd_set_macsec_offload_off_parsed(
 	if (ret != 0)
 		return;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MACSEC_INSERT) {
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) {
 #ifdef RTE_NET_IXGBE
 		ret = rte_pmd_ixgbe_macsec_disable(port_id);
 #endif
@@ -11928,7 +11928,7 @@ cmd_set_macsec_offload_off_parsed(
 	switch (ret) {
 	case 0:
 		ports[port_id].dev_conf.txmode.offloads &=
-						~DEV_TX_OFFLOAD_MACSEC_INSERT;
+						~RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 		cmd_reconfig_device_queue(port_id, 1, 1);
 		break;
 	case -ENODEV:
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 31d8ba1b913c..e9520e045aa0 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -86,60 +86,60 @@ static const struct {
 };
 
 const struct rss_type_info rss_type_table[] = {
-	{ "all", ETH_RSS_ETH | ETH_RSS_VLAN | ETH_RSS_IP | ETH_RSS_TCP |
-		ETH_RSS_UDP | ETH_RSS_SCTP | ETH_RSS_L2_PAYLOAD |
-		ETH_RSS_L2TPV3 | ETH_RSS_ESP | ETH_RSS_AH | ETH_RSS_PFCP |
-		ETH_RSS_GTPU | ETH_RSS_ECPRI | ETH_RSS_MPLS},
+	{ "all", RTE_ETH_RSS_ETH | RTE_ETH_RSS_VLAN | RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP |
+		RTE_ETH_RSS_UDP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_L2_PAYLOAD |
+		RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_ESP | RTE_ETH_RSS_AH | RTE_ETH_RSS_PFCP |
+		RTE_ETH_RSS_GTPU | RTE_ETH_RSS_ECPRI | RTE_ETH_RSS_MPLS},
 	{ "none", 0 },
-	{ "eth", ETH_RSS_ETH },
-	{ "l2-src-only", ETH_RSS_L2_SRC_ONLY },
-	{ "l2-dst-only", ETH_RSS_L2_DST_ONLY },
-	{ "vlan", ETH_RSS_VLAN },
-	{ "s-vlan", ETH_RSS_S_VLAN },
-	{ "c-vlan", ETH_RSS_C_VLAN },
-	{ "ipv4", ETH_RSS_IPV4 },
-	{ "ipv4-frag", ETH_RSS_FRAG_IPV4 },
-	{ "ipv4-tcp", ETH_RSS_NONFRAG_IPV4_TCP },
-	{ "ipv4-udp", ETH_RSS_NONFRAG_IPV4_UDP },
-	{ "ipv4-sctp", ETH_RSS_NONFRAG_IPV4_SCTP },
-	{ "ipv4-other", ETH_RSS_NONFRAG_IPV4_OTHER },
-	{ "ipv6", ETH_RSS_IPV6 },
-	{ "ipv6-frag", ETH_RSS_FRAG_IPV6 },
-	{ "ipv6-tcp", ETH_RSS_NONFRAG_IPV6_TCP },
-	{ "ipv6-udp", ETH_RSS_NONFRAG_IPV6_UDP },
-	{ "ipv6-sctp", ETH_RSS_NONFRAG_IPV6_SCTP },
-	{ "ipv6-other", ETH_RSS_NONFRAG_IPV6_OTHER },
-	{ "l2-payload", ETH_RSS_L2_PAYLOAD },
-	{ "ipv6-ex", ETH_RSS_IPV6_EX },
-	{ "ipv6-tcp-ex", ETH_RSS_IPV6_TCP_EX },
-	{ "ipv6-udp-ex", ETH_RSS_IPV6_UDP_EX },
-	{ "port", ETH_RSS_PORT },
-	{ "vxlan", ETH_RSS_VXLAN },
-	{ "geneve", ETH_RSS_GENEVE },
-	{ "nvgre", ETH_RSS_NVGRE },
-	{ "ip", ETH_RSS_IP },
-	{ "udp", ETH_RSS_UDP },
-	{ "tcp", ETH_RSS_TCP },
-	{ "sctp", ETH_RSS_SCTP },
-	{ "tunnel", ETH_RSS_TUNNEL },
+	{ "eth", RTE_ETH_RSS_ETH },
+	{ "l2-src-only", RTE_ETH_RSS_L2_SRC_ONLY },
+	{ "l2-dst-only", RTE_ETH_RSS_L2_DST_ONLY },
+	{ "vlan", RTE_ETH_RSS_VLAN },
+	{ "s-vlan", RTE_ETH_RSS_S_VLAN },
+	{ "c-vlan", RTE_ETH_RSS_C_VLAN },
+	{ "ipv4", RTE_ETH_RSS_IPV4 },
+	{ "ipv4-frag", RTE_ETH_RSS_FRAG_IPV4 },
+	{ "ipv4-tcp", RTE_ETH_RSS_NONFRAG_IPV4_TCP },
+	{ "ipv4-udp", RTE_ETH_RSS_NONFRAG_IPV4_UDP },
+	{ "ipv4-sctp", RTE_ETH_RSS_NONFRAG_IPV4_SCTP },
+	{ "ipv4-other", RTE_ETH_RSS_NONFRAG_IPV4_OTHER },
+	{ "ipv6", RTE_ETH_RSS_IPV6 },
+	{ "ipv6-frag", RTE_ETH_RSS_FRAG_IPV6 },
+	{ "ipv6-tcp", RTE_ETH_RSS_NONFRAG_IPV6_TCP },
+	{ "ipv6-udp", RTE_ETH_RSS_NONFRAG_IPV6_UDP },
+	{ "ipv6-sctp", RTE_ETH_RSS_NONFRAG_IPV6_SCTP },
+	{ "ipv6-other", RTE_ETH_RSS_NONFRAG_IPV6_OTHER },
+	{ "l2-payload", RTE_ETH_RSS_L2_PAYLOAD },
+	{ "ipv6-ex", RTE_ETH_RSS_IPV6_EX },
+	{ "ipv6-tcp-ex", RTE_ETH_RSS_IPV6_TCP_EX },
+	{ "ipv6-udp-ex", RTE_ETH_RSS_IPV6_UDP_EX },
+	{ "port", RTE_ETH_RSS_PORT },
+	{ "vxlan", RTE_ETH_RSS_VXLAN },
+	{ "geneve", RTE_ETH_RSS_GENEVE },
+	{ "nvgre", RTE_ETH_RSS_NVGRE },
+	{ "ip", RTE_ETH_RSS_IP },
+	{ "udp", RTE_ETH_RSS_UDP },
+	{ "tcp", RTE_ETH_RSS_TCP },
+	{ "sctp", RTE_ETH_RSS_SCTP },
+	{ "tunnel", RTE_ETH_RSS_TUNNEL },
 	{ "l3-pre32", RTE_ETH_RSS_L3_PRE32 },
 	{ "l3-pre40", RTE_ETH_RSS_L3_PRE40 },
 	{ "l3-pre48", RTE_ETH_RSS_L3_PRE48 },
 	{ "l3-pre56", RTE_ETH_RSS_L3_PRE56 },
 	{ "l3-pre64", RTE_ETH_RSS_L3_PRE64 },
 	{ "l3-pre96", RTE_ETH_RSS_L3_PRE96 },
-	{ "l3-src-only", ETH_RSS_L3_SRC_ONLY },
-	{ "l3-dst-only", ETH_RSS_L3_DST_ONLY },
-	{ "l4-src-only", ETH_RSS_L4_SRC_ONLY },
-	{ "l4-dst-only", ETH_RSS_L4_DST_ONLY },
-	{ "esp", ETH_RSS_ESP },
-	{ "ah", ETH_RSS_AH },
-	{ "l2tpv3", ETH_RSS_L2TPV3 },
-	{ "pfcp", ETH_RSS_PFCP },
-	{ "pppoe", ETH_RSS_PPPOE },
-	{ "gtpu", ETH_RSS_GTPU },
-	{ "ecpri", ETH_RSS_ECPRI },
-	{ "mpls", ETH_RSS_MPLS },
+	{ "l3-src-only", RTE_ETH_RSS_L3_SRC_ONLY },
+	{ "l3-dst-only", RTE_ETH_RSS_L3_DST_ONLY },
+	{ "l4-src-only", RTE_ETH_RSS_L4_SRC_ONLY },
+	{ "l4-dst-only", RTE_ETH_RSS_L4_DST_ONLY },
+	{ "esp", RTE_ETH_RSS_ESP },
+	{ "ah", RTE_ETH_RSS_AH },
+	{ "l2tpv3", RTE_ETH_RSS_L2TPV3 },
+	{ "pfcp", RTE_ETH_RSS_PFCP },
+	{ "pppoe", RTE_ETH_RSS_PPPOE },
+	{ "gtpu", RTE_ETH_RSS_GTPU },
+	{ "ecpri", RTE_ETH_RSS_ECPRI },
+	{ "mpls", RTE_ETH_RSS_MPLS },
 	{ NULL, 0 },
 };
 
@@ -474,39 +474,39 @@ static void
 device_infos_display_speeds(uint32_t speed_capa)
 {
 	printf("\n\tDevice speed capability:");
-	if (speed_capa == ETH_LINK_SPEED_AUTONEG)
+	if (speed_capa == RTE_ETH_LINK_SPEED_AUTONEG)
 		printf(" Autonegotiate (all speeds)");
-	if (speed_capa & ETH_LINK_SPEED_FIXED)
+	if (speed_capa & RTE_ETH_LINK_SPEED_FIXED)
 		printf(" Disable autonegotiate (fixed speed)  ");
-	if (speed_capa & ETH_LINK_SPEED_10M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M_HD)
 		printf(" 10 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_10M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10M)
 		printf(" 10 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M_HD)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M_HD)
 		printf(" 100 Mbps half-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_100M)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100M)
 		printf(" 100 Mbps full-duplex  ");
-	if (speed_capa & ETH_LINK_SPEED_1G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_1G)
 		printf(" 1 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_2_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_2_5G)
 		printf(" 2.5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_5G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_5G)
 		printf(" 5 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_10G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_10G)
 		printf(" 10 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_20G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_20G)
 		printf(" 20 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_25G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_25G)
 		printf(" 25 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_40G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_40G)
 		printf(" 40 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_50G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_50G)
 		printf(" 50 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_56G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_56G)
 		printf(" 56 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_100G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_100G)
 		printf(" 100 Gbps  ");
-	if (speed_capa & ETH_LINK_SPEED_200G)
+	if (speed_capa & RTE_ETH_LINK_SPEED_200G)
 		printf(" 200 Gbps  ");
 }
 
@@ -636,9 +636,9 @@ port_infos_display(portid_t port_id)
 
 	printf("\nLink status: %s\n", (link.link_status) ? ("up") : ("down"));
 	printf("Link speed: %s\n", rte_eth_link_speed_to_str(link.link_speed));
-	printf("Link duplex: %s\n", (link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+	printf("Link duplex: %s\n", (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 	       ("full-duplex") : ("half-duplex"));
-	printf("Autoneg status: %s\n", (link.link_autoneg == ETH_LINK_AUTONEG) ?
+	printf("Autoneg status: %s\n", (link.link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 	       ("On") : ("Off"));
 
 	if (!rte_eth_dev_get_mtu(port_id, &mtu))
@@ -656,22 +656,22 @@ port_infos_display(portid_t port_id)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 	if (vlan_offload >= 0){
 		printf("VLAN offload: \n");
-		if (vlan_offload & ETH_VLAN_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_STRIP_OFFLOAD)
 			printf("  strip on, ");
 		else
 			printf("  strip off, ");
 
-		if (vlan_offload & ETH_VLAN_FILTER_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_FILTER_OFFLOAD)
 			printf("filter on, ");
 		else
 			printf("filter off, ");
 
-		if (vlan_offload & ETH_VLAN_EXTEND_OFFLOAD)
+		if (vlan_offload & RTE_ETH_VLAN_EXTEND_OFFLOAD)
 			printf("extend on, ");
 		else
 			printf("extend off, ");
 
-		if (vlan_offload & ETH_QINQ_STRIP_OFFLOAD)
+		if (vlan_offload & RTE_ETH_QINQ_STRIP_OFFLOAD)
 			printf("qinq strip on\n");
 		else
 			printf("qinq strip off\n");
@@ -1166,7 +1166,7 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
 	diag = rte_eth_dev_set_mtu(port_id, mtu);
 	if (diag)
 		fprintf(stderr, "Set MTU failed. diag=%d\n", diag);
-	else if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	else if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		/*
 		 * Ether overhead in driver is equal to the difference of
 		 * max_rx_pktlen and max_mtu in rte_eth_dev_info when the
@@ -1175,12 +1175,12 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
 		eth_overhead = dev_info.max_rx_pktlen - dev_info.max_mtu;
 		if (mtu > RTE_ETHER_MTU) {
 			rte_port->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			rte_port->dev_conf.rxmode.max_rx_pkt_len =
 						mtu + eth_overhead;
 		} else
 			rte_port->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	}
 }
 
@@ -2767,8 +2767,8 @@ port_rss_reta_info(portid_t port_id,
 	}
 
 	for (i = 0; i < nb_entries; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 		printf("RSS RETA configuration: hash index=%u, queue=%u\n",
@@ -3118,7 +3118,7 @@ dcb_fwd_config_setup(void)
 	for (lc_id = 0; lc_id < cur_fwd_config.nb_fwd_lcores; lc_id++) {
 		fwd_lcores[lc_id]->stream_nb = 0;
 		fwd_lcores[lc_id]->stream_idx = sm_id;
-		for (i = 0; i < ETH_MAX_VMDQ_POOL; i++) {
+		for (i = 0; i < RTE_ETH_MAX_VMDQ_POOL; i++) {
 			/* if the nb_queue is zero, means this tc is
 			 * not enabled on the POOL
 			 */
@@ -4181,11 +4181,11 @@ vlan_extend_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	} else {
-		vlan_offload &= ~ETH_VLAN_EXTEND_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
+		vlan_offload &= ~RTE_ETH_VLAN_EXTEND_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4211,11 +4211,11 @@ rx_vlan_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
-		vlan_offload &= ~ETH_VLAN_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		vlan_offload &= ~RTE_ETH_VLAN_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4256,11 +4256,11 @@ rx_vlan_filter_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload |= RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	} else {
-		vlan_offload &= ~ETH_VLAN_FILTER_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+		vlan_offload &= ~RTE_ETH_VLAN_FILTER_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4286,11 +4286,11 @@ rx_vlan_qinq_strip_set(portid_t port_id, int on)
 	vlan_offload = rte_eth_dev_get_vlan_offload(port_id);
 
 	if (on) {
-		vlan_offload |= ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload |= RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	} else {
-		vlan_offload &= ~ETH_QINQ_STRIP_OFFLOAD;
-		port_rx_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+		vlan_offload &= ~RTE_ETH_QINQ_STRIP_OFFLOAD;
+		port_rx_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 	}
 
 	diag = rte_eth_dev_set_vlan_offload(port_id, vlan_offload);
@@ -4360,7 +4360,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 		return;
 
 	if (ports[port_id].dev_conf.txmode.offloads &
-	    DEV_TX_OFFLOAD_QINQ_INSERT) {
+	    RTE_ETH_TX_OFFLOAD_QINQ_INSERT) {
 		fprintf(stderr, "Error, as QinQ has been enabled.\n");
 		return;
 	}
@@ -4369,7 +4369,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_VLAN_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: vlan insert is not supported by port %d\n",
 			port_id);
@@ -4377,7 +4377,7 @@ tx_vlan_set(portid_t port_id, uint16_t vlan_id)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_VLAN_INSERT;
+	ports[port_id].dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	ports[port_id].tx_vlan_id = vlan_id;
 }
 
@@ -4396,7 +4396,7 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	if (ret != 0)
 		return;
 
-	if ((dev_info.tx_offload_capa & DEV_TX_OFFLOAD_QINQ_INSERT) == 0) {
+	if ((dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) == 0) {
 		fprintf(stderr,
 			"Error: qinq insert not supported by port %d\n",
 			port_id);
@@ -4404,8 +4404,8 @@ tx_qinq_set(portid_t port_id, uint16_t vlan_id, uint16_t vlan_id_outer)
 	}
 
 	tx_vlan_reset(port_id);
-	ports[port_id].dev_conf.txmode.offloads |= (DEV_TX_OFFLOAD_VLAN_INSERT |
-						    DEV_TX_OFFLOAD_QINQ_INSERT);
+	ports[port_id].dev_conf.txmode.offloads |= (RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+						    RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = vlan_id;
 	ports[port_id].tx_vlan_id_outer = vlan_id_outer;
 }
@@ -4414,8 +4414,8 @@ void
 tx_vlan_reset(portid_t port_id)
 {
 	ports[port_id].dev_conf.txmode.offloads &=
-				~(DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_QINQ_INSERT);
+				~(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_QINQ_INSERT);
 	ports[port_id].tx_vlan_id = 0;
 	ports[port_id].tx_vlan_id_outer = 0;
 }
@@ -4821,7 +4821,7 @@ set_queue_rate_limit(portid_t port_id, uint16_t queue_idx, uint16_t rate)
 	ret = eth_link_get_nowait_print_err(port_id, &link);
 	if (ret < 0)
 		return 1;
-	if (link.link_speed != ETH_SPEED_NUM_UNKNOWN &&
+	if (link.link_speed != RTE_ETH_SPEED_NUM_UNKNOWN &&
 	    rate > link.link_speed) {
 		fprintf(stderr,
 			"Invalid rate value:%u bigger than link speed: %u\n",
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 38cc256533b6..454a2d41c366 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -485,7 +485,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		if (info->l4_proto == IPPROTO_TCP && tso_segsz) {
 			ol_flags |= PKT_TX_IP_CKSUM;
 		} else {
-			if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 				ol_flags |= PKT_TX_IP_CKSUM;
 			} else {
 				ipv4_hdr->hdr_checksum = 0;
@@ -502,7 +502,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		udp_hdr = (struct rte_udp_hdr *)((char *)l3_hdr + info->l3_len);
 		/* do not recalculate udp cksum if it was 0 */
 		if (udp_hdr->dgram_cksum != 0) {
-			if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+			if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 				ol_flags |= PKT_TX_UDP_CKSUM;
 			} else {
 				udp_hdr->dgram_cksum = 0;
@@ -517,7 +517,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 		tcp_hdr = (struct rte_tcp_hdr *)((char *)l3_hdr + info->l3_len);
 		if (tso_segsz)
 			ol_flags |= PKT_TX_TCP_SEG;
-		else if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+		else if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 			ol_flags |= PKT_TX_TCP_CKSUM;
 		} else {
 			tcp_hdr->cksum = 0;
@@ -532,7 +532,7 @@ process_inner_cksums(void *l3_hdr, const struct testpmd_offload_info *info,
 			((char *)l3_hdr + info->l3_len);
 		/* sctp payload must be a multiple of 4 to be
 		 * offloaded */
-		if ((tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
+		if ((tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
 			((ipv4_hdr->total_length & 0x3) == 0)) {
 			ol_flags |= PKT_TX_SCTP_CKSUM;
 		} else {
@@ -559,7 +559,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ipv4_hdr->hdr_checksum = 0;
 		ol_flags |= PKT_TX_OUTER_IPV4;
 
-		if (tx_offloads	& DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (tx_offloads	& RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			ol_flags |= PKT_TX_OUTER_IP_CKSUM;
 		else
 			ipv4_hdr->hdr_checksum = rte_ipv4_cksum(ipv4_hdr);
@@ -576,7 +576,7 @@ process_outer_cksums(void *outer_l3_hdr, struct testpmd_offload_info *info,
 		ol_flags |= PKT_TX_TCP_SEG;
 
 	/* Skip SW outer UDP checksum generation if HW supports it */
-	if (tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) {
 		if (info->outer_ethertype == _htons(RTE_ETHER_TYPE_IPV4))
 			udp_hdr->dgram_cksum
 				= rte_ipv4_phdr_cksum(ipv4_hdr, ol_flags);
@@ -959,9 +959,9 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 		if (info.is_tunnel == 1) {
 			if (info.tunnel_tso_segsz ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+			     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 			    (tx_offloads &
-			     DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
+			     RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)) {
 				m->outer_l2_len = info.outer_l2_len;
 				m->outer_l3_len = info.outer_l3_len;
 				m->l2_len = info.l2_len;
@@ -1022,19 +1022,19 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 					rte_be_to_cpu_16(info.outer_ethertype),
 					info.outer_l3_len);
 			/* dump tx packet info */
-			if ((tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					    DEV_TX_OFFLOAD_UDP_CKSUM |
-					    DEV_TX_OFFLOAD_TCP_CKSUM |
-					    DEV_TX_OFFLOAD_SCTP_CKSUM)) ||
+			if ((tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+					    RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)) ||
 				info.tso_segsz != 0)
 				printf("tx: m->l2_len=%d m->l3_len=%d "
 					"m->l4_len=%d\n",
 					m->l2_len, m->l3_len, m->l4_len);
 			if (info.is_tunnel == 1) {
 				if ((tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
 				    (tx_offloads &
-				    DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
+				    RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) ||
 				    (tx_ol_flags & PKT_TX_OUTER_IPV6))
 					printf("tx: m->outer_l2_len=%d "
 						"m->outer_l3_len=%d\n",
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index 9348618d0f8d..7d658d002cb6 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -100,11 +100,11 @@ pkt_burst_flow_gen(struct fwd_stream *fs)
 	vlan_tci_outer = ports[fs->tx_port].tx_vlan_id_outer;
 
 	tx_offloads = ports[fs->tx_port].dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags |= PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads	& DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	for (nb_pkt = 0; nb_pkt < nb_pkt_per_burst; nb_pkt++) {
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index 0568ea794d48..1d878ba0a694 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -72,11 +72,11 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
 	tx_offloads = txp->dev_conf.txmode.offloads;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 	for (i = 0; i < nb_rx; i++) {
 		if (likely(i < nb_rx - 1))
diff --git a/app/test-pmd/macswap_common.h b/app/test-pmd/macswap_common.h
index 7e9a3590a436..7ade9a686b7c 100644
--- a/app/test-pmd/macswap_common.h
+++ b/app/test-pmd/macswap_common.h
@@ -10,11 +10,11 @@ ol_flags_init(uint64_t tx_offload)
 {
 	uint64_t ol_flags = 0;
 
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_VLAN_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) ?
 			PKT_TX_VLAN : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_QINQ_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_QINQ_INSERT) ?
 			PKT_TX_QINQ : 0;
-	ol_flags |= (tx_offload & DEV_TX_OFFLOAD_MACSEC_INSERT) ?
+	ol_flags |= (tx_offload & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT) ?
 			PKT_TX_MACSEC : 0;
 
 	return ol_flags;
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 7c13210f04aa..1d0187723532 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -475,29 +475,29 @@ parse_event_printing_config(const char *optarg, int enable)
 static int
 parse_link_speed(int n)
 {
-	uint32_t speed = ETH_LINK_SPEED_FIXED;
+	uint32_t speed = RTE_ETH_LINK_SPEED_FIXED;
 
 	switch (n) {
 	case 1000:
-		speed |= ETH_LINK_SPEED_1G;
+		speed |= RTE_ETH_LINK_SPEED_1G;
 		break;
 	case 10000:
-		speed |= ETH_LINK_SPEED_10G;
+		speed |= RTE_ETH_LINK_SPEED_10G;
 		break;
 	case 25000:
-		speed |= ETH_LINK_SPEED_25G;
+		speed |= RTE_ETH_LINK_SPEED_25G;
 		break;
 	case 40000:
-		speed |= ETH_LINK_SPEED_40G;
+		speed |= RTE_ETH_LINK_SPEED_40G;
 		break;
 	case 50000:
-		speed |= ETH_LINK_SPEED_50G;
+		speed |= RTE_ETH_LINK_SPEED_50G;
 		break;
 	case 100000:
-		speed |= ETH_LINK_SPEED_100G;
+		speed |= RTE_ETH_LINK_SPEED_100G;
 		break;
 	case 200000:
-		speed |= ETH_LINK_SPEED_200G;
+		speed |= RTE_ETH_LINK_SPEED_200G;
 		break;
 	case 100:
 	case 10:
@@ -912,13 +912,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "pkt-filter-size")) {
 				if (!strcmp(optarg, "64K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_64K;
+						RTE_ETH_FDIR_PBALLOC_64K;
 				else if (!strcmp(optarg, "128K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_128K;
+						RTE_ETH_FDIR_PBALLOC_128K;
 				else if (!strcmp(optarg, "256K"))
 					fdir_conf.pballoc =
-						RTE_FDIR_PBALLOC_256K;
+						RTE_ETH_FDIR_PBALLOC_256K;
 				else
 					rte_exit(EXIT_FAILURE, "pkt-filter-size %s invalid -"
 						 " must be: 64K or 128K or 256K\n",
@@ -960,34 +960,34 @@ launch_args_parse(int argc, char** argv)
 			}
 #endif
 			if (!strcmp(lgopts[opt_idx].name, "disable-crc-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 			if (!strcmp(lgopts[opt_idx].name, "enable-lro"))
-				rx_offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 			if (!strcmp(lgopts[opt_idx].name, "enable-scatter"))
-				rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 			if (!strcmp(lgopts[opt_idx].name, "enable-rx-cksum"))
-				rx_offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-rx-timestamp"))
-				rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 			if (!strcmp(lgopts[opt_idx].name, "enable-hw-vlan"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-filter"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-vlan-extend"))
-				rx_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 			if (!strcmp(lgopts[opt_idx].name,
 					"enable-hw-qinq-strip"))
-				rx_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+				rx_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 
 			if (!strcmp(lgopts[opt_idx].name, "enable-drop-en"))
 				rx_drop_en = 1;
@@ -1009,13 +1009,13 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "forward-mode"))
 				set_pkt_forwarding_mode(optarg);
 			if (!strcmp(lgopts[opt_idx].name, "rss-ip"))
-				rss_hf = ETH_RSS_IP;
+				rss_hf = RTE_ETH_RSS_IP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-udp"))
-				rss_hf = ETH_RSS_UDP;
+				rss_hf = RTE_ETH_RSS_UDP;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-inner"))
-				rss_hf |= ETH_RSS_LEVEL_INNERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_INNERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rss-level-outer"))
-				rss_hf |= ETH_RSS_LEVEL_OUTERMOST;
+				rss_hf |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 			if (!strcmp(lgopts[opt_idx].name, "rxq")) {
 				n = atoi(optarg);
 				if (n >= 0 && check_nb_rxq((queueid_t)n) == 0)
@@ -1386,12 +1386,12 @@ launch_args_parse(int argc, char** argv)
 			if (!strcmp(lgopts[opt_idx].name, "rx-mq-mode")) {
 				char *end = NULL;
 				n = strtoul(optarg, &end, 16);
-				if (n >= 0 && n <= ETH_MQ_RX_VMDQ_DCB_RSS)
+				if (n >= 0 && n <= RTE_ETH_MQ_RX_VMDQ_DCB_RSS)
 					rx_mq_mode = (enum rte_eth_rx_mq_mode)n;
 				else
 					rte_exit(EXIT_FAILURE,
 						 "rx-mq-mode must be >= 0 and <= %d\n",
-						 ETH_MQ_RX_VMDQ_DCB_RSS);
+						 RTE_ETH_MQ_RX_VMDQ_DCB_RSS);
 			}
 			if (!strcmp(lgopts[opt_idx].name, "record-core-cycles"))
 				record_core_cycles = 1;
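
/* Illustrative sketch only (not part of the patch): the RTE_ETH_LINK_SPEED_*
 * flags assembled by parse_link_speed() above are consumed through
 * rte_eth_conf.link_speeds; the port id and queue counts are assumptions.
 */
#include <rte_ethdev.h>

static int
force_fixed_10g(uint16_t port_id)
{
	struct rte_eth_conf conf = {0};

	conf.link_speeds = RTE_ETH_LINK_SPEED_FIXED | RTE_ETH_LINK_SPEED_10G;

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}
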
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 6cbe9ba3c893..30bf897d6da8 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -337,7 +337,7 @@ uint64_t noisy_lkup_num_reads_writes;
 /*
  * Receive Side Scaling (RSS) configuration.
  */
-uint64_t rss_hf = ETH_RSS_IP; /* RSS IP by default. */
+uint64_t rss_hf = RTE_ETH_RSS_IP; /* RSS IP by default. */
 
 /*
  * Port topology configuration
@@ -454,12 +454,12 @@ struct rte_eth_rxmode rx_mode = {
 };
 
 struct rte_eth_txmode tx_mode = {
-	.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE,
+	.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
 };
 
-struct rte_fdir_conf fdir_conf = {
+struct rte_eth_fdir_conf fdir_conf = {
 	.mode = RTE_FDIR_MODE_NONE,
-	.pballoc = RTE_FDIR_PBALLOC_64K,
+	.pballoc = RTE_ETH_FDIR_PBALLOC_64K,
 	.status = RTE_FDIR_REPORT_STATUS,
 	.mask = {
 		.vlan_tci_mask = 0xFFEF,
@@ -513,7 +513,7 @@ uint8_t gro_flush_cycles = GRO_DEFAULT_FLUSH_CYCLES;
 /*
  * hexadecimal bitmask of RX mq mode can be enabled.
  */
-enum rte_eth_rx_mq_mode rx_mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
+enum rte_eth_rx_mq_mode rx_mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
 
 /*
  * Used to set forced link speed
@@ -1437,9 +1437,9 @@ init_config_port_offloads(portid_t pid, uint32_t socket_id)
 			"Updating jumbo frame offload failed for port %u\n",
 			pid);
 
-	if (!(port->dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(port->dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		port->dev_conf.txmode.offloads &=
-			~DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Apply Rx offloads configuration */
 	for (i = 0; i < port->dev_info.max_rx_queues; i++)
@@ -1566,8 +1566,8 @@ init_config(void)
 
 	init_port_config();
 
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_UDP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_UDP_TSO;
 	/*
 	 * Records which Mbuf pool to use by each logical core, if needed.
 	 */
@@ -3154,7 +3154,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3414,17 +3414,17 @@ update_jumbo_frame_offload(portid_t portid)
 		port->dev_conf.rxmode.max_rx_pkt_len = RTE_ETHER_MTU + eth_overhead;
 
 	if (port->dev_conf.rxmode.max_rx_pkt_len <= RTE_ETHER_MTU + eth_overhead) {
-		rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		on = false;
 	} else {
-		if ((port->dev_info.rx_offload_capa & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+		if ((port->dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0) {
 			fprintf(stderr,
 				"Frame size (%u) is not supported by port %u\n",
 				port->dev_conf.rxmode.max_rx_pkt_len,
 				portid);
 			return -1;
 		}
-		rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		on = true;
 	}
 
@@ -3436,16 +3436,16 @@ update_jumbo_frame_offload(portid_t portid)
 		/* Apply JUMBO_FRAME offload configuration to Rx queue(s) */
 		for (qid = 0; qid < port->dev_info.nb_rx_queues; qid++) {
 			if (on)
-				port->rx_conf[qid].offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+				port->rx_conf[qid].offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			else
-				port->rx_conf[qid].offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+				port->rx_conf[qid].offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		}
 	}
 
 	/* If JUMBO_FRAME is set MTU conversion done by ethdev layer,
 	 * if unset do it here
 	 */
-	if ((rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) == 0) {
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0) {
 		ret = rte_eth_dev_set_mtu(portid,
 				port->dev_conf.rxmode.max_rx_pkt_len - eth_overhead);
 		if (ret)
@@ -3486,9 +3486,9 @@ init_port_config(void)
 			if( port->dev_conf.rx_adv_conf.rss_conf.rss_hf != 0)
 				port->dev_conf.rxmode.mq_mode =
 					(enum rte_eth_rx_mq_mode)
-						(rx_mq_mode & ETH_MQ_RX_RSS);
+						(rx_mq_mode & RTE_ETH_MQ_RX_RSS);
 			else
-				port->dev_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+				port->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 		}
 
 		rxtx_port_config(port);
@@ -3575,9 +3575,9 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		vmdq_rx_conf->enable_default_pool = 0;
 		vmdq_rx_conf->default_pool = 0;
 		vmdq_rx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 		vmdq_tx_conf->nb_queue_pools =
-			(num_tcs ==  ETH_4_TCS ? ETH_32_POOLS : ETH_16_POOLS);
+			(num_tcs ==  RTE_ETH_4_TCS ? RTE_ETH_32_POOLS : RTE_ETH_16_POOLS);
 
 		vmdq_rx_conf->nb_pool_maps = vmdq_rx_conf->nb_queue_pools;
 		for (i = 0; i < vmdq_rx_conf->nb_pool_maps; i++) {
@@ -3585,7 +3585,7 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 			vmdq_rx_conf->pool_map[i].pools =
 				1 << (i % vmdq_rx_conf->nb_queue_pools);
 		}
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			vmdq_rx_conf->dcb_tc[i] = i % num_tcs;
 			vmdq_tx_conf->dcb_tc[i] = i % num_tcs;
 		}
@@ -3593,8 +3593,8 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		/* set DCB mode of RX and TX of multiple queues */
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_VMDQ_DCB);
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+					(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB);
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 	} else {
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&eth_conf->rx_adv_conf.dcb_rx_conf;
@@ -3610,23 +3610,23 @@ get_eth_dcb_conf(portid_t pid, struct rte_eth_conf *eth_conf,
 		rx_conf->nb_tcs = num_tcs;
 		tx_conf->nb_tcs = num_tcs;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			rx_conf->dcb_tc[i] = i % num_tcs;
 			tx_conf->dcb_tc[i] = i % num_tcs;
 		}
 
 		eth_conf->rxmode.mq_mode =
 				(enum rte_eth_rx_mq_mode)
-					(rx_mq_mode & ETH_MQ_RX_DCB_RSS);
+					(rx_mq_mode & RTE_ETH_MQ_RX_DCB_RSS);
 		eth_conf->rx_adv_conf.rss_conf = rss_conf;
-		eth_conf->txmode.mq_mode = ETH_MQ_TX_DCB;
+		eth_conf->txmode.mq_mode = RTE_ETH_MQ_TX_DCB;
 	}
 
 	if (pfc_en)
 		eth_conf->dcb_capability_en =
-				ETH_DCB_PG_SUPPORT | ETH_DCB_PFC_SUPPORT;
+				RTE_ETH_DCB_PG_SUPPORT | RTE_ETH_DCB_PFC_SUPPORT;
 	else
-		eth_conf->dcb_capability_en = ETH_DCB_PG_SUPPORT;
+		eth_conf->dcb_capability_en = RTE_ETH_DCB_PG_SUPPORT;
 
 	return 0;
 }
@@ -3653,7 +3653,7 @@ init_port_dcb_config(portid_t pid,
 	retval = get_eth_dcb_conf(pid, &port_conf, dcb_mode, num_tcs, pfc_en);
 	if (retval < 0)
 		return retval;
-	port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	/* re-configure the device . */
 	retval = rte_eth_dev_configure(pid, nb_rxq, nb_rxq, &port_conf);
@@ -3703,7 +3703,7 @@ init_port_dcb_config(portid_t pid,
 
 	rxtx_port_config(rte_port);
 	/* VLAN filter */
-	rte_port->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rte_port->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	for (i = 0; i < RTE_DIM(vlan_tags); i++)
 		rx_vft_set(pid, vlan_tags[i], 1);
 
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 16a3598e48c5..e4ad8a6a7cff 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -446,7 +446,7 @@ extern lcoreid_t bitrate_lcore_id;
 extern uint8_t bitrate_enabled;
 #endif
 
-extern struct rte_fdir_conf fdir_conf;
+extern struct rte_eth_fdir_conf fdir_conf;
 
 /*
  * Configuration of packet segments used to scatter received packets
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index aed820f5d340..5409d7a0deb0 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -352,11 +352,11 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	tx_offloads = txp->dev_conf.txmode.offloads;
 	vlan_tci = txp->tx_vlan_id;
 	vlan_tci_outer = txp->tx_vlan_id_outer;
-	if (tx_offloads	& DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (tx_offloads	& RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ol_flags = PKT_TX_VLAN_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		ol_flags |= PKT_TX_QINQ_PKT;
-	if (tx_offloads & DEV_TX_OFFLOAD_MACSEC_INSERT)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MACSEC_INSERT)
 		ol_flags |= PKT_TX_MACSEC;
 
 	/*
diff --git a/app/test/test_ethdev_link.c b/app/test/test_ethdev_link.c
index ee11987bae28..6248aea49abd 100644
--- a/app/test/test_ethdev_link.c
+++ b/app/test/test_ethdev_link.c
@@ -14,10 +14,10 @@ test_link_status_up_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -27,9 +27,9 @@ test_link_status_up_default(void)
 	TEST_ASSERT_BUFFERS_ARE_EQUAL("Link up at 2.5 Gbps FDX Autoneg",
 		text, strlen(text), "Invalid default link status string");
 
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_FIXED;
-	link_status.link_speed = ETH_SPEED_NUM_10M,
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_FIXED;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_10M;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #2: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -37,7 +37,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_UNKNOWN;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -45,7 +45,7 @@ test_link_status_up_default(void)
 		text, strlen(text), "Invalid default link status "
 		"string with HDX");
 
-	link_status.link_speed = ETH_SPEED_NUM_NONE;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #3: %s\n", text);
 	RTE_TEST_ASSERT(ret > 0, "Failed to format default string\n");
@@ -54,9 +54,9 @@ test_link_status_up_default(void)
 		"string with HDX");
 
 	/* test max str len */
-	link_status.link_speed = ETH_SPEED_NUM_200G;
-	link_status.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link_status.link_autoneg = ETH_LINK_AUTONEG;
+	link_status.link_speed = RTE_ETH_SPEED_NUM_200G;
+	link_status.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link_status.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	ret = rte_eth_link_to_str(text, sizeof(text), &link_status);
 	printf("Default link up #4:len = %d, %s\n", ret, text);
 	RTE_TEST_ASSERT(ret < RTE_ETH_LINK_MAX_STR_LEN,
@@ -69,10 +69,10 @@ test_link_status_down_default(void)
 {
 	int ret = 0;
 	struct rte_eth_link link_status = {
-		.link_speed = ETH_SPEED_NUM_2_5G,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_speed = RTE_ETH_SPEED_NUM_2_5G,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -90,9 +90,9 @@ test_link_status_invalid(void)
 	int ret = 0;
 	struct rte_eth_link link_status = {
 		.link_speed = 55555,
-		.link_status = ETH_LINK_UP,
-		.link_autoneg = ETH_LINK_AUTONEG,
-		.link_duplex = ETH_LINK_FULL_DUPLEX
+		.link_status = RTE_ETH_LINK_UP,
+		.link_autoneg = RTE_ETH_LINK_AUTONEG,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX
 	};
 	char text[RTE_ETH_LINK_MAX_STR_LEN];
 
@@ -116,21 +116,21 @@ test_link_speed_all_values(void)
 		const char *value;
 		uint32_t link_speed;
 	} speed_str_map[] = {
-		{ "None",   ETH_SPEED_NUM_NONE },
-		{ "10 Mbps",  ETH_SPEED_NUM_10M },
-		{ "100 Mbps", ETH_SPEED_NUM_100M },
-		{ "1 Gbps",   ETH_SPEED_NUM_1G },
-		{ "2.5 Gbps", ETH_SPEED_NUM_2_5G },
-		{ "5 Gbps",   ETH_SPEED_NUM_5G },
-		{ "10 Gbps",  ETH_SPEED_NUM_10G },
-		{ "20 Gbps",  ETH_SPEED_NUM_20G },
-		{ "25 Gbps",  ETH_SPEED_NUM_25G },
-		{ "40 Gbps",  ETH_SPEED_NUM_40G },
-		{ "50 Gbps",  ETH_SPEED_NUM_50G },
-		{ "56 Gbps",  ETH_SPEED_NUM_56G },
-		{ "100 Gbps", ETH_SPEED_NUM_100G },
-		{ "200 Gbps", ETH_SPEED_NUM_200G },
-		{ "Unknown",  ETH_SPEED_NUM_UNKNOWN },
+		{ "None",   RTE_ETH_SPEED_NUM_NONE },
+		{ "10 Mbps",  RTE_ETH_SPEED_NUM_10M },
+		{ "100 Mbps", RTE_ETH_SPEED_NUM_100M },
+		{ "1 Gbps",   RTE_ETH_SPEED_NUM_1G },
+		{ "2.5 Gbps", RTE_ETH_SPEED_NUM_2_5G },
+		{ "5 Gbps",   RTE_ETH_SPEED_NUM_5G },
+		{ "10 Gbps",  RTE_ETH_SPEED_NUM_10G },
+		{ "20 Gbps",  RTE_ETH_SPEED_NUM_20G },
+		{ "25 Gbps",  RTE_ETH_SPEED_NUM_25G },
+		{ "40 Gbps",  RTE_ETH_SPEED_NUM_40G },
+		{ "50 Gbps",  RTE_ETH_SPEED_NUM_50G },
+		{ "56 Gbps",  RTE_ETH_SPEED_NUM_56G },
+		{ "100 Gbps", RTE_ETH_SPEED_NUM_100G },
+		{ "200 Gbps", RTE_ETH_SPEED_NUM_200G },
+		{ "Unknown",  RTE_ETH_SPEED_NUM_UNKNOWN },
 		{ "Invalid",   50505 }
 	};
 
diff --git a/app/test/test_event_eth_rx_adapter.c b/app/test/test_event_eth_rx_adapter.c
index 9198767b4194..bb7917010d62 100644
--- a/app/test/test_event_eth_rx_adapter.c
+++ b/app/test/test_event_eth_rx_adapter.c
@@ -106,7 +106,7 @@ port_init_rx_intr(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 		.intr_conf = {
 			.rxq = 1,
@@ -121,7 +121,7 @@ port_init(uint16_t port, struct rte_mempool *mp)
 {
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_NONE,
+			.mq_mode = RTE_ETH_MQ_RX_NONE,
 		},
 	};
 
diff --git a/app/test/test_kni.c b/app/test/test_kni.c
index 96733554b6c4..40ab0d5c4ca4 100644
--- a/app/test/test_kni.c
+++ b/app/test/test_kni.c
@@ -74,7 +74,7 @@ static const struct rte_eth_txconf tx_conf = {
 
 static const struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 8a5c8310a8b4..23c024aa1b0c 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -134,12 +134,12 @@ static uint16_t vlan_id = 0x100;
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.split_hdr_size = 0,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_mode4.c b/app/test/test_link_bonding_mode4.c
index 2c835fa7adc7..1556f14d6921 100644
--- a/app/test/test_link_bonding_mode4.c
+++ b/app/test/test_link_bonding_mode4.c
@@ -107,12 +107,12 @@ static struct link_bonding_unittest_params test_params  = {
 
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 5dac60ca1edd..cdf1c4fd259d 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -52,7 +52,7 @@ struct slave_conf {
 
 	struct rte_eth_rss_conf rss_conf;
 	uint8_t rss_key[40];
-	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t is_slave;
 	struct rte_ring *rxtx_queue[RXTX_QUEUE_COUNT];
@@ -61,7 +61,7 @@ struct slave_conf {
 struct link_bonding_rssconf_unittest_params {
 	uint8_t bond_port_id;
 	struct rte_eth_dev_info bond_dev_info;
-	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 bond_reta_conf[512 / RTE_ETH_RETA_GROUP_SIZE];
 	struct slave_conf slave_ports[SLAVE_COUNT];
 
 	struct rte_mempool *mbuf_pool;
@@ -80,29 +80,29 @@ static struct link_bonding_rssconf_unittest_params test_params  = {
  */
 static struct rte_eth_conf default_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
 static struct rte_eth_conf rss_pmd_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IPV6,
+			.rss_hf = RTE_ETH_RSS_IPV6,
 		},
 	},
 	.lpbk_mode = 0,
@@ -209,13 +209,13 @@ bond_slaves(void)
 static int
 reta_set(uint16_t port_id, uint8_t value, int reta_size)
 {
-	struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[512/RTE_ETH_RETA_GROUP_SIZE];
 	int i, j;
 
-	for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++) {
+	for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++) {
 		/* select all fields to set */
 		reta_conf[i].mask = ~0LL;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			reta_conf[i].reta[j] = value;
 	}
 
@@ -234,8 +234,8 @@ reta_check_synced(struct slave_conf *port)
 	for (i = 0; i < test_params.bond_dev_info.reta_size;
 			i++) {
 
-		int index = i / RTE_RETA_GROUP_SIZE;
-		int shift = i % RTE_RETA_GROUP_SIZE;
+		int index = i / RTE_ETH_RETA_GROUP_SIZE;
+		int shift = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (port->reta_conf[index].reta[shift] !=
 				test_params.bond_reta_conf[index].reta[shift])
@@ -253,7 +253,7 @@ static int
 bond_reta_fetch(void) {
 	unsigned j;
 
-	for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_RETA_GROUP_SIZE;
+	for (j = 0; j < test_params.bond_dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE;
 			j++)
 		test_params.bond_reta_conf[j].mask = ~0LL;
 
@@ -270,7 +270,7 @@ static int
 slave_reta_fetch(struct slave_conf *port) {
 	unsigned j;
 
-	for (j = 0; j < port->dev_info.reta_size / RTE_RETA_GROUP_SIZE; j++)
+	for (j = 0; j < port->dev_info.reta_size / RTE_ETH_RETA_GROUP_SIZE; j++)
 		port->reta_conf[j].mask = ~0LL;
 
 	TEST_ASSERT_SUCCESS(rte_eth_dev_rss_reta_query(port->port_id,
diff --git a/app/test/test_pmd_perf.c b/app/test/test_pmd_perf.c
index 3a248d512c4a..da7b7ad1f7cc 100644
--- a/app/test/test_pmd_perf.c
+++ b/app/test/test_pmd_perf.c
@@ -62,12 +62,12 @@ static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 1,  /* enable loopback */
 };
@@ -156,7 +156,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -823,7 +823,7 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		/* bulk alloc rx, full-featured tx */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "hybrid")) {
 		/* bulk alloc rx, vector tx
@@ -832,13 +832,13 @@ test_set_rxtx_conf(cmdline_fixed_string_t mode)
 		 */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 		return 0;
 	} else if (!strcmp(mode, "full")) {
 		/* full feature rx,tx pair */
 		tx_conf.tx_rs_thresh = 32;
 		tx_conf.tx_free_thresh = 32;
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		return 0;
 	}
 
diff --git a/app/test/virtual_pmd.c b/app/test/virtual_pmd.c
index 7036f401ed95..6eecfa385537 100644
--- a/app/test/virtual_pmd.c
+++ b/app/test/virtual_pmd.c
@@ -53,7 +53,7 @@ static int  virtual_ethdev_stop(struct rte_eth_dev *eth_dev __rte_unused)
 	void *pkt = NULL;
 	struct virtual_ethdev_private *prv = eth_dev->data->dev_private;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 	while (rte_ring_dequeue(prv->rx_queue, &pkt) != -ENOENT)
 		rte_pktmbuf_free(pkt);
@@ -178,7 +178,7 @@ virtual_ethdev_link_update_success(struct rte_eth_dev *bonded_eth_dev,
 		int wait_to_complete __rte_unused)
 {
 	if (!bonded_eth_dev->data->dev_started)
-		bonded_eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		bonded_eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -574,9 +574,9 @@ virtual_ethdev_create(const char *name, struct rte_ether_addr *mac_addr,
 	eth_dev->data->nb_rx_queues = (uint16_t)1;
 	eth_dev->data->nb_tx_queues = (uint16_t)1;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
-	eth_dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	eth_dev->data->mac_addrs = rte_zmalloc(name, RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL)
diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst
index 53560d3830d7..1c0ea988f239 100644
--- a/doc/guides/eventdevs/cnxk.rst
+++ b/doc/guides/eventdevs/cnxk.rst
@@ -42,7 +42,7 @@ Features of the OCTEON cnxk SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue configuration.
 - HW managed event vectorization on CN10K for packets enqueued from ethdev to
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 11fbebfcd243..0fa57abfa3e0 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -35,7 +35,7 @@ Features of the OCTEON TX2 SSO PMD are:
 - HW managed packets enqueued from ethdev to eventdev exposed through event eth
   RX adapter.
 - N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE``
+- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
   capability while maintaining receive packet order.
 - Full Rx/Tx offload support defined through ethdev queue config.
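
/* Illustrative sketch of the lock-free Tx point above: before letting
 * multiple lcores call rte_eth_tx_burst() on one queue, an application can
 * check the renamed capability bit; the port id here is an assumption.
 */
#include <rte_ethdev.h>

static int
tx_is_mt_lockfree(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;

	return (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE) != 0;
}
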
 
diff --git a/doc/guides/howto/debug_troubleshoot.rst b/doc/guides/howto/debug_troubleshoot.rst
index 457ac441429a..13f30e39363e 100644
--- a/doc/guides/howto/debug_troubleshoot.rst
+++ b/doc/guides/howto/debug_troubleshoot.rst
@@ -71,7 +71,7 @@ RX Port and associated core :numref:`dtg_rx_rate`.
    * Identify if port Speed and Duplex is matching to desired values with
      ``rte_eth_link_get``.
 
-   * Check ``DEV_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
+   * Check ``RTE_ETH_RX_OFFLOAD_JUMBO_FRAME`` is set with ``rte_eth_dev_info_get``.
 
    * Check promiscuous mode if the drops do not occur for unique MAC address
      with ``rte_eth_promiscuous_get``.
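
/* Hypothetical helper sketching the checks described above with the renamed
 * macros; the port id and the printed messages are illustrative assumptions.
 */
#include <stdio.h>
#include <rte_ethdev.h>

static void
check_rx_port(uint16_t port_id)
{
	struct rte_eth_link link;
	struct rte_eth_dev_info dev_info;

	if (rte_eth_link_get(port_id, &link) == 0 &&
	    link.link_status == RTE_ETH_LINK_UP)
		printf("port %u: %u Mbps, %s duplex\n", port_id,
		       link.link_speed,
		       link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
		       "full" : "half");

	if (rte_eth_dev_info_get(port_id, &dev_info) == 0 &&
	    (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0)
		printf("port %u: jumbo frames not supported\n", port_id);
}
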
diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index e75f4fa9e3bc..77827e750195 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -877,22 +877,22 @@ processing. This improved performance is derived from a number of optimizations:
     * TX: only the following reduced set of transmit offloads is supported in
       vector mode::
 
-       DEV_TX_OFFLOAD_MBUF_FAST_FREE
+       RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
 
     * RX: only the following reduced set of receive offloads is supported in
       vector mode (note that jumbo MTU is allowed only when the MTU setting
-      does not require `DEV_RX_OFFLOAD_SCATTER` to be enabled)::
-
-       DEV_RX_OFFLOAD_VLAN_STRIP
-       DEV_RX_OFFLOAD_KEEP_CRC
-       DEV_RX_OFFLOAD_JUMBO_FRAME
-       DEV_RX_OFFLOAD_IPV4_CKSUM
-       DEV_RX_OFFLOAD_UDP_CKSUM
-       DEV_RX_OFFLOAD_TCP_CKSUM
-       DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
-       DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
-       DEV_RX_OFFLOAD_RSS_HASH
-       DEV_RX_OFFLOAD_VLAN_FILTER
+      does not require `RTE_ETH_RX_OFFLOAD_SCATTER` to be enabled)::
+
+       RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+       RTE_ETH_RX_OFFLOAD_KEEP_CRC
+       RTE_ETH_RX_OFFLOAD_JUMBO_FRAME
+       RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+       RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+       RTE_ETH_RX_OFFLOAD_RSS_HASH
+       RTE_ETH_RX_OFFLOAD_VLAN_FILTER
 
 The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
 vector processing is made at run-time when the port is started; if no transmit
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 91bdcd065a95..0209730b904a 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -432,7 +432,7 @@ Limitations
 .. code-block:: console
 
      vlan_offload = rte_eth_dev_get_vlan_offload(port);
-     vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
+     vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
      rte_eth_dev_set_vlan_offload(port, vlan_offload);
 
 Another alternative is to modify the adapter's ingress VLAN rewrite mode so that
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index a96e12d15515..7f7d6ae45658 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -30,7 +30,7 @@ Speed capabilities
 
 Supports getting the speed capabilities that the current device is capable of.
 
-* **[provides] rte_eth_dev_info**: ``speed_capa:ETH_LINK_SPEED_*``.
+* **[provides] rte_eth_dev_info**: ``speed_capa:RTE_ETH_LINK_SPEED_*``.
 * **[related]  API**: ``rte_eth_dev_info_get()``.
 
 
@@ -101,11 +101,11 @@ Supports Rx interrupts.
 Lock-free Tx queue
 ------------------
 
-If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+If a PMD advertises RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 invoke rte_eth_tx_burst() concurrently on the same Tx queue without SW lock.
 
-* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MT_LOCKFREE``.
-* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MT_LOCKFREE``.
+* **[uses]    rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
+* **[provides] rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``.
 * **[related]  API**: ``rte_eth_tx_burst()``.
 
 
@@ -117,8 +117,8 @@ Fast mbuf free
 Supports optimization for fast release of mbufs following successful Tx.
 Requires that per queue, all mbufs come from the same mempool and have refcnt = 1.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE``.
 
 
 .. _nic_features_free_tx_mbuf_on_demand:
@@ -165,7 +165,7 @@ Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_JUMBO_FRAME``.
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -178,7 +178,7 @@ Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,12 +206,12 @@ LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
   ``dev_conf.rxmode.max_lro_pkt_size``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_TCP_LRO``.
 * **[provides]   rte_eth_dev_info**: ``max_lro_pkt_size``.
 
 
@@ -222,12 +222,12 @@ TSO
 
 Supports TCP Segmentation Offloading.
 
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_TCP_TSO``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_TCP_TSO``.
 * **[uses]       rte_eth_desc_lim**: ``nb_seg_max``, ``nb_mtu_seg_max``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:`` ``PKT_TX_TCP_SEG``, ``PKT_TX_IPV4``, ``PKT_TX_IPV6``, ``PKT_TX_IP_CKSUM``.
 * **[uses]       mbuf**: ``mbuf.tso_segsz``, ``mbuf.l2_len``, ``mbuf.l3_len``, ``mbuf.l4_len``.
 * **[implements] datapath**: ``TSO functionality``.
-* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_TCP_TSO,DEV_TX_OFFLOAD_UDP_TSO``.
+* **[provides]   rte_eth_dev_info**: ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_TCP_TSO,RTE_ETH_TX_OFFLOAD_UDP_TSO``.
 
 
 .. _nic_features_promiscuous_mode:
@@ -288,9 +288,9 @@ RSS hash
 
 Supports RSS hashing on RX.
 
-* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_RSS_FLAG``.
+* **[uses]     user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_RSS_FLAG``.
 * **[uses]     user config**: ``dev_conf.rx_adv_conf.rss_conf``.
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] rte_eth_dev_info**: ``flow_type_rss_offloads``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
 
@@ -303,7 +303,7 @@ Inner RSS
 Supports RX RSS hashing on Inner headers.
 
 * **[uses]    rte_flow_action_rss**: ``level``.
-* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_RSS_HASH``.
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_RSS_HASH``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_RSS_HASH``, ``mbuf.rss``.
 
 
@@ -340,7 +340,7 @@ VMDq
 
 Supports Virtual Machine Device Queues (VMDq).
 
-* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_VMDQ_FLAG``.
+* **[uses] user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_VMDQ_FLAG``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses] user config**: ``dev_conf.rx_adv_conf.vmdq_rx_conf``.
 * **[uses] user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -363,7 +363,7 @@ DCB
 
 Supports Data Center Bridging (DCB).
 
-* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``ETH_MQ_RX_DCB_FLAG``.
+* **[uses]       user config**: ``dev_conf.rxmode.mq_mode`` = ``RTE_ETH_MQ_RX_DCB_FLAG``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.vmdq_dcb_conf``.
 * **[uses]       user config**: ``dev_conf.rx_adv_conf.dcb_rx_conf``.
 * **[uses]       user config**: ``dev_conf.tx_adv_conf.vmdq_dcb_tx_conf``.
@@ -379,7 +379,7 @@ VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -428,12 +428,12 @@ Supports inline crypto processing defined by rte_security library to perform cry
 operations of security protocol while packet is received in NIC. NIC is not aware
 of protocol operations. See Security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_CRYPTO``
@@ -449,13 +449,13 @@ protocol processing for the security protocol (e.g. IPsec, MACSEC) while the
 packet is received at NIC. The NIC is capable of understanding the security
 protocol operations. See security library and PMD documentation for more details.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SECURITY``,
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_SECURITY``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_SECURITY``,
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[implements] rte_security_ops**: ``session_create``, ``session_update``,
   ``session_stats_get``, ``session_destroy``, ``set_pkt_metadata``, ``get_userdata``,
   ``capabilities_get``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_SECURITY``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_SECURITY``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_SECURITY``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_SECURITY``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD``,
   ``mbuf.ol_flags:PKT_TX_SEC_OFFLOAD``, ``mbuf.ol_flags:PKT_RX_SEC_OFFLOAD_FAILED``.
 * **[provides]   rte_security_ops, capabilities_get**:  ``action: RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL``
@@ -469,7 +469,7 @@ CRC offload
 Supports CRC stripping by hardware.
 A PMD is assumed to support CRC stripping by default. A PMD should advertise if it supports keeping CRC.
 
-* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_KEEP_CRC``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_KEEP_CRC``.
 
 
 .. _nic_features_vlan_offload:
@@ -479,13 +479,13 @@ VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
-* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_VLAN_STRIP,RTE_ETH_RX_OFFLOAD_VLAN_FILTER,RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``.
+* **[uses]       rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[uses]       mbuf**: ``mbuf.ol_flags:PKT_TX_VLAN``, ``mbuf.vlan_tci``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN`` ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_VLAN_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
 
@@ -497,14 +497,14 @@ QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ``, ``mbuf.vlan_tci_outer``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.ol_flags:PKT_RX_QINQ``,
   ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.ol_flags:PKT_RX_VLAN``
   ``mbuf.vlan_tci``, ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_QINQ_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_QINQ_INSERT``.
 
 
 .. _nic_features_fec:
@@ -518,7 +518,7 @@ information to correct the bit errors generated during data packet transmission
 improves signal quality but also brings a delay to signals. This function can be enabled or disabled as required.
 
 * **[implements] eth_dev_ops**: ``fec_get_capability``, ``fec_get``, ``fec_set``.
-* **[provides]   rte_eth_fec_capa**: ``speed:ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
+* **[provides]   rte_eth_fec_capa**: ``speed:RTE_ETH_SPEED_NUM_*``, ``capa:RTE_ETH_FEC_MODE_TO_CAPA()``.
 * **[related]    API**: ``rte_eth_fec_get_capability()``, ``rte_eth_fec_get()``, ``rte_eth_fec_set()``.
 
 
@@ -529,16 +529,16 @@ L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[uses]     mbuf**: ``mbuf.l2_len``, ``mbuf.l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_IPV4_CKSUM``.
 
 
 .. _nic_features_l4_checksum_offload:
@@ -548,8 +548,8 @@ L4 checksum offload
 
 Supports L4 checksum offload.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
@@ -557,8 +557,8 @@ Supports L4 checksum offload.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM,DEV_RX_OFFLOAD_SCTP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_UDP_CKSUM,RTE_ETH_RX_OFFLOAD_TCP_CKSUM,RTE_ETH_RX_OFFLOAD_SCTP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_UDP_CKSUM,RTE_ETH_TX_OFFLOAD_TCP_CKSUM,RTE_ETH_TX_OFFLOAD_SCTP_CKSUM``.
 
 .. _nic_features_hw_timestamp:
 
@@ -567,10 +567,10 @@ Timestamp offload
 
 Supports Timestamp.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_TIMESTAMP``.
 * **[provides] mbuf**: ``mbuf.timestamp``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: DEV_RX_OFFLOAD_TIMESTAMP``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa: RTE_ETH_RX_OFFLOAD_TIMESTAMP``.
 * **[related] eth_dev_ops**: ``read_clock``.
 
 .. _nic_features_macsec_offload:
@@ -580,11 +580,11 @@ MACsec offload
 
 Supports MACsec.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_MACSEC_STRIP``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_MACSEC_INSERT``.
 
 
 .. _nic_features_inner_l3_checksum:
@@ -594,16 +594,16 @@ Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_IP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
 .. _nic_features_inner_l4_checksum:
@@ -613,15 +613,15 @@ Inner L4 checksum
 
 Supports inner packet L4 checksum.
 
-* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_OUTER_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_OUTER_L4_CKSUM_BAD`` | ``PKT_RX_OUTER_L4_CKSUM_GOOD`` | ``PKT_RX_OUTER_L4_CKSUM_INVALID``.
-* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[uses]     rte_eth_txconf,rte_eth_txmode**: ``offloads:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
   ``mbuf.ol_flags:PKT_TX_OUTER_UDP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_UDP_CKSUM``,
-  ``tx_offload_capa,tx_queue_offload_capa:DEV_TX_OFFLOAD_OUTER_UDP_CKSUM``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM``,
+  ``tx_offload_capa,tx_queue_offload_capa:RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM``.
 
 
 .. _nic_features_packet_type_parsing:
diff --git a/doc/guides/nics/fm10k.rst b/doc/guides/nics/fm10k.rst
index 7b8ef0e7823d..3dff65d89b6d 100644
--- a/doc/guides/nics/fm10k.rst
+++ b/doc/guides/nics/fm10k.rst
@@ -78,11 +78,11 @@ To enable via ``RX_OLFLAGS`` use ``RTE_LIBRTE_FM10K_RX_OLFLAGS_ENABLE=y``.
 To guarantee the constraint, the following capabilities in ``dev_conf.rxmode.offloads``
 will be checked:
 
-*   ``DEV_RX_OFFLOAD_VLAN_EXTEND``
+*   ``RTE_ETH_RX_OFFLOAD_VLAN_EXTEND``
 
-*   ``DEV_RX_OFFLOAD_CHECKSUM``
+*   ``RTE_ETH_RX_OFFLOAD_CHECKSUM``
 
-*   ``DEV_RX_OFFLOAD_HEADER_SPLIT``
+*   ``RTE_ETH_RX_OFFLOAD_HEADER_SPLIT``
 
 *   ``fdir_conf->mode``
 
diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index fcea8151bf3c..e60e3b2a761d 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -222,21 +222,21 @@ For example,
     *   If the max number of VFs (max_vfs) is set in the range of 1 to 32:
 
         If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are totally 32
-        pools (ETH_32_POOLS), and each VF could have 4 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 4 Rx queues;
 
         If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are totally 32
-        pools (ETH_32_POOLS), and each VF could have 2 Rx queues;
+        pools (RTE_ETH_32_POOLS), and each VF could have 2 Rx queues;
 
     *   If the max number of VFs (max_vfs) is in the range of 33 to 64:
 
         If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then an error message is expected
         as ``rxq`` is not correct in this case;
 
-        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (ETH_64_POOLS),
+        If the number of rxq is 2 (``--rxq=2`` in testpmd), then there is totally 64 pools (RTE_ETH_64_POOLS),
         and each VF has 2 Rx queues;
 
-    On host, to enable VF RSS functionality, rx mq mode should be set as ETH_MQ_RX_VMDQ_RSS
-    or ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
+    On host, to enable VF RSS functionality, rx mq mode should be set as RTE_ETH_MQ_RX_VMDQ_RSS
+    or RTE_ETH_MQ_RX_RSS mode, and SRIOV mode should be activated (max_vfs >= 1).
     It also needs the VF RSS information to be configured, such as the hash function, RSS key and RSS key length.
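
A minimal, illustrative sketch of such a configuration (not part of the patch;
it only uses the renamed mode/RSS names and standard struct rte_eth_conf
fields) could be:

#include <rte_ethdev.h>

/* Request VMDq+RSS Rx multi-queue mode on the PF so that VF RSS can be used. */
static const struct rte_eth_conf vf_rss_port_conf = {
	.rxmode = {
		.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS,
	},
	.rx_adv_conf = {
		.rss_conf = {
			.rss_key = NULL,	/* NULL keeps the default RSS key */
			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP,
		},
	},
};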
 
 .. note::
diff --git a/doc/guides/nics/ixgbe.rst b/doc/guides/nics/ixgbe.rst
index b82e63438285..24fbccc982f5 100644
--- a/doc/guides/nics/ixgbe.rst
+++ b/doc/guides/nics/ixgbe.rst
@@ -69,13 +69,13 @@ Other features are supported using optional MACRO configuration. They include:
 
 To guarantee the constraint, capabilities in dev_conf.rxmode.offloads will be checked:
 
-*   DEV_RX_OFFLOAD_VLAN_STRIP
+*   RTE_ETH_RX_OFFLOAD_VLAN_STRIP
 
-*   DEV_RX_OFFLOAD_VLAN_EXTEND
+*   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
 
-*   DEV_RX_OFFLOAD_CHECKSUM
+*   RTE_ETH_RX_OFFLOAD_CHECKSUM
 
-*   DEV_RX_OFFLOAD_HEADER_SPLIT
+*   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
 
 *   dev_conf
 
@@ -143,13 +143,13 @@ l3fwd
 ~~~~~
 
 When running l3fwd with vPMD, there is one thing to note.
-In the configuration, ensure that DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
+In the configuration, ensure that RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads is NOT set.
 Otherwise, by default, RX vPMD is disabled.
 
 load_balancer
 ~~~~~~~~~~~~~
 
-As in the case of l3fwd, to enable vPMD, do NOT set DEV_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
+As in the case of l3fwd, to enable vPMD, do NOT set RTE_ETH_RX_OFFLOAD_CHECKSUM in port_conf.rxmode.offloads.
 In addition, for improved performance, use -bsz "(32,32),(64,64),(32,32)" in load_balancer to avoid using the default burst size of 144.
 
 
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index bae73f42d882..6facb68b9545 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -371,7 +371,7 @@ Limitations
 
 - CRC:
 
-  - ``DEV_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
+  - ``RTE_ETH_RX_OFFLOAD_KEEP_CRC`` cannot be supported with decapsulation
     for some NICs (such as ConnectX-6 Dx, ConnectX-6 Lx, and BlueField-2).
     The capability bit ``scatter_fcs_w_decap_disable`` shows NIC support.
 
@@ -607,7 +607,7 @@ Driver options
   small-packet traffic.
 
   When MPRQ is enabled, max_rx_pkt_len can be larger than the size of
-  user-provided mbuf even if DEV_RX_OFFLOAD_SCATTER isn't enabled. PMD will
+  user-provided mbuf even if RTE_ETH_RX_OFFLOAD_SCATTER isn't enabled. PMD will
   configure large stride size enough to accommodate max_rx_pkt_len as long as
   device allows. Note that this can waste system memory compared to enabling Rx
   scatter and multi-segment packet.
diff --git a/doc/guides/nics/tap.rst b/doc/guides/nics/tap.rst
index 3ce696b605d1..681010d9ed7d 100644
--- a/doc/guides/nics/tap.rst
+++ b/doc/guides/nics/tap.rst
@@ -275,7 +275,7 @@ An example utility for eBPF instruction generation in the format of C arrays wil
 be added in next releases
 
 TAP reports on supported RSS functions as part of dev_infos_get callback:
-``ETH_RSS_IP``, ``ETH_RSS_UDP`` and ``ETH_RSS_TCP``.
+``RTE_ETH_RSS_IP``, ``RTE_ETH_RSS_UDP`` and ``RTE_ETH_RSS_TCP``.
 **Known limitation:** TAP supports all of the above hash functions together
 and not in partial combinations.
 
diff --git a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
index 7bff0aef0b74..9b2c31a2f0bc 100644
--- a/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
+++ b/doc/guides/prog_guide/generic_segmentation_offload_lib.rst
@@ -194,11 +194,11 @@ To segment an outgoing packet, an application must:
 
    - the bit mask of required GSO types. The GSO library uses the same macros as
      those that describe a physical device's TX offloading capabilities (i.e.
-     ``DEV_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
+     ``RTE_ETH_TX_OFFLOAD_*_TSO``) for gso_types. For example, if an application
      wants to segment TCP/IPv4 packets, it should set gso_types to
-     ``DEV_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
-     supported for gso_types are ``DEV_TX_OFFLOAD_VXLAN_TNL_TSO``, and
-     ``DEV_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
+     ``RTE_ETH_TX_OFFLOAD_TCP_TSO``. The only other supported values currently
+     supported for gso_types are ``RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO``, and
+     ``RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO``; a combination of these macros is also
      allowed.
 
    - a flag, that indicates whether the IPv4 headers of output segments should
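
As a quick illustration of the gso_types usage described above, a minimal
sketch (assuming the struct rte_gso_ctx layout from librte_gso; the mempool
parameters and gso_size value are placeholders):

#include <rte_ethdev.h>
#include <rte_gso.h>

/* Build a GSO context that segments TCP/IPv4 and VXLAN-encapsulated packets,
 * expressing gso_types with the renamed Tx offload macros. */
static struct rte_gso_ctx
make_gso_ctx(struct rte_mempool *direct_mp, struct rte_mempool *indirect_mp)
{
	struct rte_gso_ctx ctx = {
		.direct_pool = direct_mp,
		.indirect_pool = indirect_mp,
		.flag = RTE_GSO_FLAG_IPID_FIXED, /* keep a fixed IPv4 ID in output segments */
		.gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO |
			     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO,
		.gso_size = 1400,
	};

	return ctx;
}
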
diff --git a/doc/guides/prog_guide/mbuf_lib.rst b/doc/guides/prog_guide/mbuf_lib.rst
index 2f190b40e43a..dc6186a44ae2 100644
--- a/doc/guides/prog_guide/mbuf_lib.rst
+++ b/doc/guides/prog_guide/mbuf_lib.rst
@@ -137,7 +137,7 @@ a vxlan-encapsulated tcp packet:
     mb->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CSUM
     set out_ip checksum to 0 in the packet
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
 
 - calculate checksum of out_ip and out_udp::
 
@@ -147,8 +147,8 @@ a vxlan-encapsulated tcp packet:
     set out_ip checksum to 0 in the packet
     set out_udp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM
-  and DEV_TX_OFFLOAD_UDP_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+  and RTE_ETH_TX_OFFLOAD_UDP_CKSUM.
 
 - calculate checksum of in_ip::
 
@@ -158,7 +158,7 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
 
   This is similar to case 1), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of in_ip and in_tcp::
@@ -170,8 +170,8 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
   This is similar to case 2), but l2_len is different. It is supported
-  on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM and
-  DEV_TX_OFFLOAD_TCP_CKSUM.
+  on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM and
+  RTE_ETH_TX_OFFLOAD_TCP_CKSUM.
   Note that it can only work if outer L4 checksum is 0.
 
 - segment inner TCP::
@@ -185,7 +185,7 @@ a vxlan-encapsulated tcp packet:
     set in_tcp checksum to pseudo header without including the IP
       payload length using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_TCP_TSO.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_TCP_TSO.
   Note that it can only work if outer L4 checksum is 0.
 
 - calculate checksum of out_ip, in_ip, in_tcp::
@@ -200,8 +200,8 @@ a vxlan-encapsulated tcp packet:
     set in_ip checksum to 0 in the packet
     set in_tcp checksum to pseudo header using rte_ipv4_phdr_cksum()
 
-  This is supported on hardware advertising DEV_TX_OFFLOAD_IPV4_CKSUM,
-  DEV_TX_OFFLOAD_UDP_CKSUM and DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM.
+  This is supported on hardware advertising RTE_ETH_TX_OFFLOAD_IPV4_CKSUM,
+  RTE_ETH_TX_OFFLOAD_UDP_CKSUM and RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM.
 
 The list of flags and their precise meaning is described in the mbuf API
 documentation (rte_mbuf.h). Also refer to the testpmd source code
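
As a compact illustration of the checksum cases above, a minimal sketch of
requesting IPv4 + UDP Tx checksum offload for a plain (non-tunnelled) packet,
guarded by the renamed capability flags, might look like this; the application
must still write the pseudo-header checksum into the UDP header as described
above:

#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
request_udp_tx_csum(uint16_t port_id, struct rte_mbuf *m,
		    uint16_t l2_len, uint16_t l3_len)
{
	struct rte_eth_dev_info info;

	if (rte_eth_dev_info_get(port_id, &info) != 0)
		return -1;

	/* Only ask for the offload if the device advertises it. */
	if (!(info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) ||
	    !(info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_UDP_CKSUM))
		return -1; /* fall back to software checksums */

	m->l2_len = l2_len;
	m->l3_len = l3_len;
	m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_UDP_CKSUM;
	return 0;
}
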
diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 0d4ac77a7ccf..68312898448c 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
 
 Avoiding lock contention is a key issue in a multi-core environment.
 To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
 In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
 
 To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
 
 Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
 
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
 concurrently on the same tx queue without SW lock. This PMD feature is found in some NICs and is useful in the following use cases:
 
 *  Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
 *  In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
    enables more scaling as all workers can send the packets.
 
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
 
 Device Identification, Ownership and Configuration
 --------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
 The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
 Supported offloads can be either per-port or per-queue.
 
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
 Any requested offloading by an application must be within the device capabilities.
 Any offloading is disabled by default if it is not set in the parameter
 ``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
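
A minimal, illustrative sketch of that flow, enabling offloads only when the
device reports them and passing the result to rte_eth_dev_configure():

#include <rte_ethdev.h>

static int
configure_port(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_dev_info info;
	struct rte_eth_conf conf = {0};
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;

	/* Request only offloads present in the reported capabilities. */
	if (info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_CHECKSUM)
		conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
	if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;

	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}
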
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 2b42d5ec8c05..1bac8f04b96e 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1835,23 +1835,23 @@ only matching traffic goes through.
 
 .. table:: RSS
 
-   +---------------+---------------------------------------------+
-   | Field         | Value                                       |
-   +===============+=============================================+
-   | ``func``      | RSS hash function to apply                  |
-   +---------------+---------------------------------------------+
-   | ``level``     | encapsulation level for ``types``           |
-   +---------------+---------------------------------------------+
-   | ``types``     | specific RSS hash types (see ``ETH_RSS_*``) |
-   +---------------+---------------------------------------------+
-   | ``key_len``   | hash key length in bytes                    |
-   +---------------+---------------------------------------------+
-   | ``queue_num`` | number of entries in ``queue``              |
-   +---------------+---------------------------------------------+
-   | ``key``       | hash key                                    |
-   +---------------+---------------------------------------------+
-   | ``queue``     | queue indices to use                        |
-   +---------------+---------------------------------------------+
+   +---------------+-------------------------------------------------+
+   | Field         | Value                                           |
+   +===============+=================================================+
+   | ``func``      | RSS hash function to apply                      |
+   +---------------+-------------------------------------------------+
+   | ``level``     | encapsulation level for ``types``               |
+   +---------------+-------------------------------------------------+
+   | ``types``     | specific RSS hash types (see ``RTE_ETH_RSS_*``) |
+   +---------------+-------------------------------------------------+
+   | ``key_len``   | hash key length in bytes                        |
+   +---------------+-------------------------------------------------+
+   | ``queue_num`` | number of entries in ``queue``                  |
+   +---------------+-------------------------------------------------+
+   | ``key``       | hash key                                        |
+   +---------------+-------------------------------------------------+
+   | ``queue``     | queue indices to use                            |
+   +---------------+-------------------------------------------------+
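
A minimal sketch of filling these fields with the renamed RTE_ETH_RSS_* types
(illustrative only):

#include <rte_common.h>
#include <rte_flow.h>

static const uint16_t rss_queues[] = { 0, 1 };

/* Spread matching traffic over two queues, hashing on IP and TCP headers. */
static const struct rte_flow_action_rss rss_action = {
	.func = RTE_ETH_HASH_FUNCTION_DEFAULT,
	.level = 0,			/* hash on the outermost headers */
	.types = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
	.key_len = 0,			/* keep the device default key */
	.key = NULL,
	.queue_num = RTE_DIM(rss_queues),
	.queue = rss_queues,
};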
 
 Action: ``PF``
 ^^^^^^^^^^^^^^
diff --git a/doc/guides/prog_guide/rte_security.rst b/doc/guides/prog_guide/rte_security.rst
index f72bc8a78fa6..e3bd451917f0 100644
--- a/doc/guides/prog_guide/rte_security.rst
+++ b/doc/guides/prog_guide/rte_security.rst
@@ -560,7 +560,7 @@ created by the application is attached to the security session by the API
 
 For Inline Crypto and Inline protocol offload, device specific defined metadata is
 updated in the mbuf using ``rte_security_set_pkt_metadata()`` if
-``DEV_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
+``RTE_ETH_TX_OFFLOAD_SEC_NEED_MDATA`` is set.
 
 For inline protocol offloaded ingress traffic, the application can register a
 pointer, ``userdata`` , in the security session. When the packet is received,
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b0b..20159a1c9a90 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -58,22 +58,16 @@ Deprecation Notices
   ``RTE_ETH_FLOW_MAX`` is one sample of the mentioned case, adding a new flow
   type will break the ABI because of ``flex_mask[RTE_ETH_FLOW_MAX]`` array
   usage in following public struct hierarchy:
-  ``rte_eth_fdir_flex_conf -> rte_fdir_conf -> rte_eth_conf (in the middle)``.
+  ``rte_eth_fdir_flex_conf -> rte_eth_fdir_conf -> rte_eth_conf (in the middle)``.
   Need to identify this kind of usages and fix in 20.11, otherwise this blocks
   us extending existing enum/define.
   One solution can be using a fixed size array instead of ``.*MAX.*`` value.
 
-* ethdev: Will add ``RTE_ETH_`` prefix to all ethdev macros/enums in v21.11.
-  Macros will be added for backward compatibility.
-  Backward compatibility macros will be removed on v22.11.
-  A few old backward compatibility macros from 2013 that does not have
-  proper prefix will be removed on v21.11.
-
 * ethdev: The flow director API, including ``rte_eth_conf.fdir_conf`` field,
   and the related structures (``rte_fdir_*`` and ``rte_eth_fdir_*``),
   will be removed in DPDK 20.11.
 
-* ethdev: New offload flags ``DEV_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
+* ethdev: New offload flags ``RTE_ETH_RX_OFFLOAD_FLOW_MARK`` will be added in 19.11.
   This will allow application to enable or disable PMDs from updating
   ``rte_mbuf::hash::fdir``.
   This scheme will allow PMDs to avoid writes to ``rte_mbuf`` fields on Rx and
@@ -98,7 +92,7 @@ Deprecation Notices
   either by ``rte_eth_dev_configure()`` or ``rte_eth_dev_set_mtu()``.
 
   An application may need to configure device for a specific Rx packet size, like for
-  cases ``DEV_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
+  cases ``RTE_ETH_RX_OFFLOAD_SCATTER`` is not supported and device received packet size
   can't be bigger than Rx buffer size.
   To cover these cases an application needs to know the device packet overhead to be
   able to calculate the ``mtu`` corresponding to a Rx buffer size, for this
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a554efaf..daff4de36a76 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -100,6 +100,9 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* ethdev: All enums & macros updated to have ``RTE_ETH`` prefix and structures
+  updated to have ``rte_eth`` prefix. DPDK components updated to use new names.
+
 
 Known Issues
 ------------
diff --git a/doc/guides/sample_app_ug/ipsec_secgw.rst b/doc/guides/sample_app_ug/ipsec_secgw.rst
index 78171b25f96e..782574dd39d5 100644
--- a/doc/guides/sample_app_ug/ipsec_secgw.rst
+++ b/doc/guides/sample_app_ug/ipsec_secgw.rst
@@ -209,12 +209,12 @@ Where:
     device will ensure the ordering. Ordering will be lost when tried in PARALLEL.
 
 *   ``--rxoffload MASK``: RX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_RX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_RX_OFFLOAD_* values). It is an optional parameter and
     allows user to disable some of the RX HW offload capabilities.
     By default all HW RX offloads are enabled.
 
 *   ``--txoffload MASK``: TX HW offload capabilities to enable/use on this port
-    (bitmask of DEV_TX_OFFLOAD_* values). It is an optional parameter and
+    (bitmask of RTE_ETH_TX_OFFLOAD_* values). It is an optional parameter and
     allows user to disable some of the TX HW offload capabilities.
     By default all HW TX offloads are enabled.
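
For instance, a mask combining checksum and multi-segment transmission could be
derived from the renamed flags, as in this small, hypothetical helper:

#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

static void
print_tx_offload_mask(void)
{
	uint64_t mask = RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
			RTE_ETH_TX_OFFLOAD_MULTI_SEGS;

	printf("--txoffload 0x%" PRIx64 "\n", mask);
}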
 
diff --git a/doc/guides/testpmd_app_ug/run_app.rst b/doc/guides/testpmd_app_ug/run_app.rst
index 6061674239f4..d7f5951d4639 100644
--- a/doc/guides/testpmd_app_ug/run_app.rst
+++ b/doc/guides/testpmd_app_ug/run_app.rst
@@ -526,7 +526,7 @@ The command line options are:
     Set the hexadecimal bitmask of RX multi queue mode which can be enabled.
     The default value is 0x7::
 
-       ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG | ETH_MQ_RX_VMDQ_FLAG
+       RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG
 
 *   ``--record-core-cycles``
 
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index be52e6f72dab..a922988607ef 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -90,20 +90,20 @@ int dpaa_intr_disable(char *if_name);
 struct usdpaa_ioctl_link_status_args_old {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
-	/* link autoneg (ETH_LINK_AUTONEG/FIXED)*/
+	/* link autoneg (RTE_ETH_LINK_AUTONEG/FIXED)*/
 	int     link_autoneg;
 
 };
@@ -111,16 +111,16 @@ struct usdpaa_ioctl_link_status_args {
 struct usdpaa_ioctl_update_link_status_args {
 	/* network device node name */
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link status(ETH_LINK_UP/DOWN) */
+	/* link status(RTE_ETH_LINK_UP/DOWN) */
 	int     link_status;
 };
 
 struct usdpaa_ioctl_update_link_speed {
 	/* network device node name*/
 	char    if_name[IF_NAME_MAX_LEN];
-	/* link speed (ETH_SPEED_NUM_)*/
+	/* link speed (RTE_ETH_SPEED_NUM_)*/
 	int     link_speed;
-	/* link duplex (ETH_LINK_[HALF/FULL]_DUPLEX)*/
+	/* link duplex (RTE_ETH_LINK_[HALF/FULL]_DUPLEX)*/
 	int     link_duplex;
 };
 
diff --git a/drivers/common/cnxk/roc_npc.h b/drivers/common/cnxk/roc_npc.h
index bab25fd72eee..360bf75d3861 100644
--- a/drivers/common/cnxk/roc_npc.h
+++ b/drivers/common/cnxk/roc_npc.h
@@ -153,7 +153,7 @@ enum roc_npc_rss_hash_function {
 struct roc_npc_action_rss {
 	enum roc_npc_rss_hash_function func;
 	uint32_t level;
-	uint64_t types;	       /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types;	       /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len;      /**< Hash key length in bytes. */
 	uint32_t queue_num;    /**< Number of entries in @p queue. */
 	const uint8_t *key;    /**< Hash key. */
diff --git a/drivers/net/af_packet/rte_eth_af_packet.c b/drivers/net/af_packet/rte_eth_af_packet.c
index b73b211fd249..fb5d549e6227 100644
--- a/drivers/net/af_packet/rte_eth_af_packet.c
+++ b/drivers/net/af_packet/rte_eth_af_packet.c
@@ -91,10 +91,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(af_packet_logtype, NOTICE);
@@ -265,7 +265,7 @@ eth_af_packet_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -295,7 +295,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 		internals->tx_queue[i].sockfd = -1;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -316,8 +316,8 @@ eth_dev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)internals->nb_queues;
 	dev_info->max_tx_queues = (uint16_t)internals->nb_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	return 0;
 }
diff --git a/drivers/net/af_xdp/rte_eth_af_xdp.c b/drivers/net/af_xdp/rte_eth_af_xdp.c
index 74ffa4511284..dbf745852716 100644
--- a/drivers/net/af_xdp/rte_eth_af_xdp.c
+++ b/drivers/net/af_xdp/rte_eth_af_xdp.c
@@ -163,10 +163,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 /* List which tracks PMDs to facilitate sharing UMEMs across them. */
@@ -654,7 +654,7 @@ eth_af_xdp_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -663,7 +663,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
diff --git a/drivers/net/ark/ark_ethdev.c b/drivers/net/ark/ark_ethdev.c
index 377299b14c7a..b618cba3f023 100644
--- a/drivers/net/ark/ark_ethdev.c
+++ b/drivers/net/ark/ark_ethdev.c
@@ -736,14 +736,14 @@ eth_ark_dev_info_get(struct rte_eth_dev *dev,
 		.nb_align = ARK_TX_MIN_QUEUE}; /* power of 2 */
 
 	/* ARK PMD supports all line rates, how do we indicate that here ?? */
-	dev_info->speed_capa = (ETH_LINK_SPEED_1G |
-				ETH_LINK_SPEED_10G |
-				ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G);
-
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_TIMESTAMP;
+	dev_info->speed_capa = (RTE_ETH_LINK_SPEED_1G |
+				RTE_ETH_LINK_SPEED_10G |
+				RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G);
+
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return 0;
 }
diff --git a/drivers/net/atlantic/atl_ethdev.c b/drivers/net/atlantic/atl_ethdev.c
index 0ce35eb519e2..5af1cff3770e 100644
--- a/drivers/net/atlantic/atl_ethdev.c
+++ b/drivers/net/atlantic/atl_ethdev.c
@@ -154,21 +154,21 @@ static struct rte_pci_driver rte_atl_pmd = {
 	.remove = eth_atl_pci_remove,
 };
 
-#define ATL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP \
-			| DEV_RX_OFFLOAD_IPV4_CKSUM \
-			| DEV_RX_OFFLOAD_UDP_CKSUM \
-			| DEV_RX_OFFLOAD_TCP_CKSUM \
-			| DEV_RX_OFFLOAD_JUMBO_FRAME \
-			| DEV_RX_OFFLOAD_MACSEC_STRIP \
-			| DEV_RX_OFFLOAD_VLAN_FILTER)
-
-#define ATL_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT \
-			| DEV_TX_OFFLOAD_IPV4_CKSUM \
-			| DEV_TX_OFFLOAD_UDP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_CKSUM \
-			| DEV_TX_OFFLOAD_TCP_TSO \
-			| DEV_TX_OFFLOAD_MACSEC_INSERT \
-			| DEV_TX_OFFLOAD_MULTI_SEGS)
+#define ATL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP \
+			| RTE_ETH_RX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_RX_OFFLOAD_JUMBO_FRAME \
+			| RTE_ETH_RX_OFFLOAD_MACSEC_STRIP \
+			| RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+
+#define ATL_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT \
+			| RTE_ETH_TX_OFFLOAD_IPV4_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_UDP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_CKSUM \
+			| RTE_ETH_TX_OFFLOAD_TCP_TSO \
+			| RTE_ETH_TX_OFFLOAD_MACSEC_INSERT \
+			| RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define SFP_EEPROM_SIZE 0x100
 
@@ -489,7 +489,7 @@ atl_dev_start(struct rte_eth_dev *dev)
 	/* set adapter started */
 	hw->adapter_stopped = 0;
 
-	if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR,
 		"Invalid link_speeds for port %u, fix speed not supported",
 				dev->data->port_id);
@@ -656,18 +656,18 @@ atl_dev_set_link_up(struct rte_eth_dev *dev)
 	uint32_t link_speeds = dev->data->dev_conf.link_speeds;
 	uint32_t speed_mask = 0;
 
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed_mask = hw->aq_nic_cfg->link_speed_msk;
 	} else {
-		if (link_speeds & ETH_LINK_SPEED_10G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed_mask |= AQ_NIC_RATE_10G;
-		if (link_speeds & ETH_LINK_SPEED_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed_mask |= AQ_NIC_RATE_5G;
-		if (link_speeds & ETH_LINK_SPEED_1G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed_mask |= AQ_NIC_RATE_1G;
-		if (link_speeds & ETH_LINK_SPEED_2_5G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed_mask |=  AQ_NIC_RATE_2G5;
-		if (link_speeds & ETH_LINK_SPEED_100M)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed_mask |= AQ_NIC_RATE_100M;
 	}
 
@@ -1128,10 +1128,10 @@ atl_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = HW_ATL_B0_RSS_REDIRECTION_MAX;
 	dev_info->flow_type_rss_offloads = ATL_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
-	dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 
 	return 0;
 }
@@ -1176,10 +1176,10 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 	u32 fc = AQ_NIC_FC_OFF;
 	int err = 0;
 
-	link.link_status = ETH_LINK_DOWN;
+	link.link_status = RTE_ETH_LINK_DOWN;
 	link.link_speed = 0;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = hw->is_autoneg ? ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = hw->is_autoneg ? RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 	memset(&old, 0, sizeof(old));
 
 	/* load old link status */
@@ -1199,8 +1199,8 @@ atl_dev_link_update(struct rte_eth_dev *dev, int wait __rte_unused)
 		return 0;
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = hw->aq_link_status.mbps;
 
 	rte_eth_linkstatus_set(dev, &link);
@@ -1334,7 +1334,7 @@ atl_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -1533,13 +1533,13 @@ atl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	hw->aq_fw_ops->get_flow_control(hw, &fc);
 
 	if (fc == AQ_NIC_FC_OFF)
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	else if ((fc & AQ_NIC_FC_RX) && (fc & AQ_NIC_FC_TX))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (fc & AQ_NIC_FC_RX)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (fc & AQ_NIC_FC_TX)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 
 	return 0;
 }
@@ -1554,13 +1554,13 @@ atl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	if (hw->aq_fw_ops->set_flow_control == NULL)
 		return -ENOTSUP;
 
-	if (fc_conf->mode == RTE_FC_NONE)
+	if (fc_conf->mode == RTE_ETH_FC_NONE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_OFF;
-	else if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_RX;
-	else if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		hw->aq_nic_cfg->flow_control = AQ_NIC_FC_TX;
-	else if (fc_conf->mode == RTE_FC_FULL)
+	else if (fc_conf->mode == RTE_ETH_FC_FULL)
 		hw->aq_nic_cfg->flow_control = (AQ_NIC_FC_RX | AQ_NIC_FC_TX);
 
 	if (old_flow_control != hw->aq_nic_cfg->flow_control)
@@ -1731,14 +1731,14 @@ atl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	ret = atl_enable_vlan_filter(dev, mask & ETH_VLAN_FILTER_MASK);
+	ret = atl_enable_vlan_filter(dev, mask & RTE_ETH_VLAN_FILTER_MASK);
 
-	cfg->vlan_strip = !!(mask & ETH_VLAN_STRIP_MASK);
+	cfg->vlan_strip = !!(mask & RTE_ETH_VLAN_STRIP_MASK);
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++)
 		hw_atl_rpo_rx_desc_vlan_stripping_set(hw, cfg->vlan_strip, i);
 
-	if (mask & ETH_VLAN_EXTEND_MASK)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK)
 		ret = -ENOTSUP;
 
 	return ret;
@@ -1754,10 +1754,10 @@ atl_vlan_tpid_set(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 	PMD_INIT_FUNC_TRACE();
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		hw_atl_rpf_vlan_inner_etht_set(hw, tpid);
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		hw_atl_rpf_vlan_outer_etht_set(hw, tpid);
 		break;
 	default:
diff --git a/drivers/net/atlantic/atl_ethdev.h b/drivers/net/atlantic/atl_ethdev.h
index f547571b5c97..da993be35faa 100644
--- a/drivers/net/atlantic/atl_ethdev.h
+++ b/drivers/net/atlantic/atl_ethdev.h
@@ -11,15 +11,15 @@
 #include "hw_atl/hw_atl_utils.h"
 
 #define ATL_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define ATL_DEV_PRIVATE_TO_HW(adapter) \
 	(&((struct atl_adapter *)adapter)->hw)
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index 7d367c9306ec..ddf110d6ce7e 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -145,10 +145,10 @@ atl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 
 	rxq->l3_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_IPV4_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->l4_csum_enabled = dev->data->dev_conf.rxmode.offloads &
-		(DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		(RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		PMD_DRV_LOG(ERR, "PMD does not support KEEP_CRC offload");
 
 	/* allocate memory for the software ring */
diff --git a/drivers/net/avp/avp_ethdev.c b/drivers/net/avp/avp_ethdev.c
index 623fa5e5ff5b..e870ced7e992 100644
--- a/drivers/net/avp/avp_ethdev.c
+++ b/drivers/net/avp/avp_ethdev.c
@@ -2011,9 +2011,9 @@ avp_dev_configure(struct rte_eth_dev *eth_dev)
 	/* Setup required number of queues */
 	_avp_set_queue_counts(eth_dev);
 
-	mask = (ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+	mask = (RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 	ret = avp_vlan_offload_set(eth_dev, mask);
 	if (ret < 0) {
 		PMD_DRV_LOG(ERR, "VLAN offload set failed by host, ret=%d\n",
@@ -2153,8 +2153,8 @@ avp_dev_link_update(struct rte_eth_dev *eth_dev,
 	struct avp_dev *avp = AVP_DEV_PRIVATE_TO_HW(eth_dev->data->dev_private);
 	struct rte_eth_link *link = &eth_dev->data->dev_link;
 
-	link->link_speed = ETH_SPEED_NUM_10G;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_speed = RTE_ETH_SPEED_NUM_10G;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_status = !!(avp->flags & AVP_F_LINKUP);
 
 	return -1;
@@ -2204,8 +2204,8 @@ avp_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->max_rx_pktlen = avp->max_rx_pkt_len;
 	dev_info->max_mac_addrs = AVP_MAX_MAC_ADDRS;
 	if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	}
 
 	return 0;
@@ -2218,9 +2218,9 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct rte_eth_conf *dev_conf = &eth_dev->data->dev_conf;
 	uint64_t offloads = dev_conf->rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		if (avp->host_features & RTE_AVP_FEATURE_VLAN_OFFLOAD) {
-			if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+			if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 				avp->features |= RTE_AVP_FEATURE_VLAN_OFFLOAD;
 			else
 				avp->features &= ~RTE_AVP_FEATURE_VLAN_OFFLOAD;
@@ -2229,13 +2229,13 @@ avp_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			PMD_DRV_LOG(ERR, "VLAN filter offload not supported\n");
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(ERR, "VLAN extend offload not supported\n");
 	}
 
diff --git a/drivers/net/axgbe/axgbe_dev.c b/drivers/net/axgbe/axgbe_dev.c
index 786288a7b079..c0f033e06b15 100644
--- a/drivers/net/axgbe/axgbe_dev.c
+++ b/drivers/net/axgbe/axgbe_dev.c
@@ -840,11 +840,11 @@ static void axgbe_rss_options(struct axgbe_port *pdata)
 	pdata->rss_hf = rss_conf->rss_hf;
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
-	if (rss_hf & (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 }
 
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 9cb4818af11f..f33b9245bcf9 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -326,7 +326,7 @@ axgbe_dev_configure(struct rte_eth_dev *dev)
 	struct axgbe_port *pdata =  dev->data->dev_private;
 	/* Checksum offload to hardware */
 	pdata->rx_csum_enable = dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_CHECKSUM;
+				RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return 0;
 }
 
@@ -335,9 +335,9 @@ axgbe_dev_rx_mq_config(struct rte_eth_dev *dev)
 {
 	struct axgbe_port *pdata = dev->data->dev_private;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		pdata->rss_enable = 1;
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		pdata->rss_enable = 0;
 	else
 		return  -1;
@@ -383,7 +383,7 @@ axgbe_dev_start(struct rte_eth_dev *dev)
 
 	rte_bit_relaxed_clear32(AXGBE_STOPPED, &pdata->dev_state);
 	rte_bit_relaxed_clear32(AXGBE_DOWN, &pdata->dev_state);
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 				max_pkt_len > pdata->rx_buf_size)
 		dev_data->scattered_rx = 1;
 
@@ -519,8 +519,8 @@ axgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
 			continue;
 		pdata->rss_table[i] = reta_conf[idx].reta[shift];
@@ -550,8 +550,8 @@ axgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) == 0)
 			continue;
 		reta_conf[idx].reta[shift] = pdata->rss_table[i];
@@ -588,13 +588,13 @@ axgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 
 	pdata->rss_hf = rss_conf->rss_hf & AXGBE_RSS_OFFLOAD;
 
-	if (pdata->rss_hf & (ETH_RSS_IPV4 | ETH_RSS_IPV6))
+	if (pdata->rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, IP2TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, TCP4TE, 1);
 	if (pdata->rss_hf &
-	    (ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP))
+	    (RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP))
 		AXGMAC_SET_BITS(pdata->rss_options, MAC_RSSCR, UDP4TE, 1);
 
 	/* Set the RSS options */
@@ -763,7 +763,7 @@ axgbe_dev_link_update(struct rte_eth_dev *dev,
 	link.link_status = pdata->phy_link;
 	link.link_speed = pdata->phy_speed;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
 		PMD_DRV_LOG(ERR, "No change in link status\n");
@@ -1206,25 +1206,25 @@ axgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_pktlen = AXGBE_RX_MAX_BUF_SIZE;
 	dev_info->max_mac_addrs = pdata->hw_feat.addn_mac + 1;
 	dev_info->max_hash_mac_addrs = pdata->hw_feat.hash_table_size;
-	dev_info->speed_capa =  ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM  |
-		DEV_RX_OFFLOAD_JUMBO_FRAME	|
-		DEV_RX_OFFLOAD_SCATTER	  |
-		DEV_RX_OFFLOAD_KEEP_CRC;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME	|
+		RTE_ETH_RX_OFFLOAD_SCATTER	  |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (pdata->hw_feat.rss) {
 		dev_info->flow_type_rss_offloads = AXGBE_RSS_OFFLOAD;
@@ -1261,13 +1261,13 @@ axgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	fc.autoneg = pdata->pause_autoneg;
 
 	if (pdata->rx_pause && pdata->tx_pause)
-		fc.mode = RTE_FC_FULL;
+		fc.mode = RTE_ETH_FC_FULL;
 	else if (pdata->rx_pause)
-		fc.mode = RTE_FC_RX_PAUSE;
+		fc.mode = RTE_ETH_FC_RX_PAUSE;
 	else if (pdata->tx_pause)
-		fc.mode = RTE_FC_TX_PAUSE;
+		fc.mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc.mode = RTE_FC_NONE;
+		fc.mode = RTE_ETH_FC_NONE;
 
 	fc_conf->high_water =  (1024 + (fc.low_water[0] << 9)) / 1024;
 	fc_conf->low_water =  (1024 + (fc.high_water[0] << 9)) / 1024;
@@ -1297,13 +1297,13 @@ axgbe_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	AXGMAC_IOWRITE(pdata, reg, reg_val);
 	fc.mode = fc_conf->mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 	} else {
@@ -1385,15 +1385,15 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	fc.mode = pfc_conf->fc.mode;
 
-	if (fc.mode == RTE_FC_FULL) {
+	if (fc.mode == RTE_ETH_FC_FULL) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_RX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_RX_PAUSE) {
 		pdata->tx_pause = 0;
 		pdata->rx_pause = 1;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 1);
-	} else if (fc.mode == RTE_FC_TX_PAUSE) {
+	} else if (fc.mode == RTE_ETH_FC_TX_PAUSE) {
 		pdata->tx_pause = 1;
 		pdata->rx_pause = 0;
 		AXGMAC_IOWRITE_BITS(pdata, MAC_RFCR, PFCE, 0);
@@ -1492,11 +1492,11 @@ static int axgb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	}
 	if (frame_size > AXGBE_ETH_MAX_LEN) {
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		val = 1;
 	} else {
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		val = 0;
 	}
 	AXGMAC_IOWRITE_BITS(pdata, MAC_RCR, JE, val);
@@ -1842,8 +1842,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	PMD_DRV_LOG(DEBUG, "EDVLP: qinq = 0x%x\n", qinq);
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_INNER\n");
+	case RTE_ETH_VLAN_TYPE_INNER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_INNER\n");
 		if (qinq) {
 			if (tpid != 0x8100 && tpid != 0x88a8)
 				PMD_DRV_LOG(ERR,
@@ -1860,8 +1860,8 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    "Inner type not supported in single tag\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
-		PMD_DRV_LOG(DEBUG, "ETH_VLAN_TYPE_OUTER\n");
+	case RTE_ETH_VLAN_TYPE_OUTER:
+		PMD_DRV_LOG(DEBUG, "RTE_ETH_VLAN_TYPE_OUTER\n");
 		if (qinq) {
 			PMD_DRV_LOG(DEBUG, "double tagging is enabled\n");
 			/*Enable outer VLAN tag*/
@@ -1878,11 +1878,11 @@ axgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 					    "tag supported 0x8100/0x88A8\n");
 		}
 		break;
-	case ETH_VLAN_TYPE_MAX:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_MAX\n");
+	case RTE_ETH_VLAN_TYPE_MAX:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_MAX\n");
 		break;
-	case ETH_VLAN_TYPE_UNKNOWN:
-		PMD_DRV_LOG(ERR, "ETH_VLAN_TYPE_UNKNOWN\n");
+	case RTE_ETH_VLAN_TYPE_UNKNOWN:
+		PMD_DRV_LOG(ERR, "RTE_ETH_VLAN_TYPE_UNKNOWN\n");
 		break;
 	}
 	return 0;
@@ -1916,8 +1916,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, CSVL, 0);
 	AXGMAC_IOWRITE_BITS(pdata, MAC_VLANIR, VLTI, 1);
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			PMD_DRV_LOG(DEBUG, "Strip ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_stripping(pdata);
@@ -1927,8 +1927,8 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_stripping(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			PMD_DRV_LOG(DEBUG, "Filter ON for device = %s\n",
 				    pdata->eth_dev->device->name);
 			pdata->hw_if.enable_rx_vlan_filtering(pdata);
@@ -1938,14 +1938,14 @@ axgbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			pdata->hw_if.disable_rx_vlan_filtering(pdata);
 		}
 	}
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			PMD_DRV_LOG(DEBUG, "enabling vlan extended mode\n");
 			axgbe_vlan_extend_enable(pdata);
 			/* Set global registers with default ethertype*/
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					    RTE_ETHER_TYPE_VLAN);
-			axgbe_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			axgbe_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					    RTE_ETHER_TYPE_VLAN);
 		} else {
 			PMD_DRV_LOG(DEBUG, "disabling vlan extended mode\n");
diff --git a/drivers/net/axgbe/axgbe_ethdev.h b/drivers/net/axgbe/axgbe_ethdev.h
index a6226729fe4d..0a3e1c59df1a 100644
--- a/drivers/net/axgbe/axgbe_ethdev.h
+++ b/drivers/net/axgbe/axgbe_ethdev.h
@@ -97,12 +97,12 @@
 
 /* Receive Side Scaling */
 #define AXGBE_RSS_OFFLOAD  ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define AXGBE_RSS_HASH_KEY_SIZE		40
 #define AXGBE_RSS_MAX_TABLE_SIZE	256
diff --git a/drivers/net/axgbe/axgbe_mdio.c b/drivers/net/axgbe/axgbe_mdio.c
index 4f98e695ae74..59fa9175aded 100644
--- a/drivers/net/axgbe/axgbe_mdio.c
+++ b/drivers/net/axgbe/axgbe_mdio.c
@@ -597,7 +597,7 @@ static void axgbe_an73_state_machine(struct axgbe_port *pdata)
 		pdata->an_int = 0;
 		axgbe_an73_clear_interrupts(pdata);
 		pdata->eth_dev->data->dev_link.link_status =
-			ETH_LINK_DOWN;
+			RTE_ETH_LINK_DOWN;
 	} else if (pdata->an_state == AXGBE_AN_ERROR) {
 		PMD_DRV_LOG(ERR, "error during auto-negotiation, state=%u\n",
 			    cur_state);
diff --git a/drivers/net/axgbe/axgbe_rxtx.c b/drivers/net/axgbe/axgbe_rxtx.c
index 33f709a6bb02..baa17a5fb43f 100644
--- a/drivers/net/axgbe/axgbe_rxtx.c
+++ b/drivers/net/axgbe/axgbe_rxtx.c
@@ -75,7 +75,7 @@ int axgbe_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 		(DMA_CH_INC * rxq->queue_id));
 	rxq->dma_tail_reg = (volatile uint32_t *)((uint8_t *)rxq->dma_regs +
 						  DMA_CH_RDTR_LO);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -286,7 +286,7 @@ axgbe_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
@@ -430,7 +430,7 @@ uint16_t eth_axgbe_recv_scattered_pkts(void *rx_queue,
 				mbuf->vlan_tci =
 					AXGMAC_GET_BITS_LE(desc->write.desc0,
 							RX_NORMAL_DESC0, OVT);
-				if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+				if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 					mbuf->ol_flags |= PKT_RX_VLAN_STRIPPED;
 				else
 					mbuf->ol_flags &= ~PKT_RX_VLAN_STRIPPED;
diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 463886f17a58..14d91f868cd8 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -94,14 +94,14 @@ bnx2x_link_update(struct rte_eth_dev *dev)
 	link.link_speed = sc->link_vars.line_speed;
 	switch (sc->link_vars.duplex) {
 		case DUPLEX_FULL:
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			break;
 		case DUPLEX_HALF:
-			link.link_duplex = ETH_LINK_HALF_DUPLEX;
+			link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 			break;
 	}
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+		 RTE_ETH_LINK_SPEED_FIXED);
 	link.link_status = sc->link_vars.link_up;
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -181,7 +181,7 @@ bnx2x_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE(sc);
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		sc->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len;
 		dev->data->mtu = sc->mtu;
 	}
@@ -412,7 +412,7 @@ bnx2xvf_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_comple
 	if (sc->old_bulletin.valid_bitmap & (1 << CHANNEL_DOWN)) {
 		PMD_DRV_LOG(ERR, sc, "PF indicated channel is down."
 				"VF device is no longer operational");
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	}
 
 	return ret;
@@ -538,8 +538,8 @@ bnx2x_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = BNX2X_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = BNX2X_MAX_RX_PKT_LEN;
 	dev_info->max_mac_addrs  = BNX2X_MAX_MAC_ADDRS;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev_info->rx_desc_lim.nb_max = MAX_RX_AVAIL;
 	dev_info->rx_desc_lim.nb_min = MIN_RX_SIZE_NONTPA;
@@ -675,7 +675,7 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
 	bnx2x_load_firmware(sc);
 	assert(sc->firmware);
 
-	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		sc->udp_rss = 1;
 
 	sc->rx_budget = BNX2X_RX_BUDGET;
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 494a1eff3700..7e313c2fb5af 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -569,40 +569,40 @@ struct bnxt_rep_info {
 #define BNXT_FW_STATUS_SHUTDOWN		0x100000
 
 #define BNXT_ETH_RSS_SUPPORT (	\
-	ETH_RSS_IPV4 |		\
-	ETH_RSS_NONFRAG_IPV4_TCP |	\
-	ETH_RSS_NONFRAG_IPV4_UDP |	\
-	ETH_RSS_IPV6 |		\
-	ETH_RSS_NONFRAG_IPV6_TCP |	\
-	ETH_RSS_NONFRAG_IPV6_UDP |	\
-	ETH_RSS_LEVEL_MASK)
-
-#define BNXT_DEV_TX_OFFLOAD_SUPPORT (DEV_TX_OFFLOAD_VLAN_INSERT | \
-				     DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_CKSUM | \
-				     DEV_TX_OFFLOAD_UDP_CKSUM | \
-				     DEV_TX_OFFLOAD_TCP_TSO | \
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_IPIP_TNL_TSO | \
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO | \
-				     DEV_TX_OFFLOAD_QINQ_INSERT | \
-				     DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define BNXT_DEV_RX_OFFLOAD_SUPPORT (DEV_RX_OFFLOAD_VLAN_FILTER | \
-				     DEV_RX_OFFLOAD_VLAN_STRIP | \
-				     DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_TCP_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-				     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM | \
-				     DEV_RX_OFFLOAD_JUMBO_FRAME | \
-				     DEV_RX_OFFLOAD_KEEP_CRC | \
-				     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-				     DEV_RX_OFFLOAD_TCP_LRO | \
-				     DEV_RX_OFFLOAD_SCATTER | \
-				     DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RSS_IPV4 |		\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |	\
+	RTE_ETH_RSS_IPV6 |		\
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP |	\
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP |	\
+	RTE_ETH_RSS_LEVEL_MASK)
+
+#define BNXT_DEV_TX_OFFLOAD_SUPPORT (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+				     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
+				     RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define BNXT_DEV_RX_OFFLOAD_SUPPORT (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+				     RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+				     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
+				     RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC | \
+				     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO | \
+				     RTE_ETH_RX_OFFLOAD_SCATTER | \
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define BNXT_HWRM_SHORT_REQ_LEN		sizeof(struct hwrm_short_input)
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index de34a2f0bb2d..3f3596f39f2f 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -426,7 +426,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 		goto err_out;
 
 	/* Alloc RSS context only if RSS mode is enabled */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int j, nr_ctxs = bnxt_rss_ctxts(bp);
 
 		/* RSS table size in Thor is 512.
@@ -458,7 +458,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	 * setting is not available at this time, it will not be
 	 * configured correctly in the CFA.
 	 */
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -493,7 +493,7 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 	bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
 
 	rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
-				    (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) ?
+				    (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
 				    true : false);
 	if (rc)
 		goto err_out;
@@ -738,11 +738,11 @@ static int bnxt_start_nic(struct bnxt *bp)
 
 	if (bp->eth_dev->data->mtu > RTE_ETHER_MTU) {
 		bp->eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		bp->flags |= BNXT_FLAG_JUMBO;
 	} else {
 		bp->eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		bp->flags &= ~BNXT_FLAG_JUMBO;
 	}
 
@@ -908,35 +908,35 @@ uint32_t bnxt_get_speed_capabilities(struct bnxt *bp)
 		link_speed = bp->link_info->support_pam4_speeds;
 
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100MBHD)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_1GB)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_2_5GB)
-		speed_capa |= ETH_LINK_SPEED_2_5G;
+		speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_10GB)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_20GB)
-		speed_capa |= ETH_LINK_SPEED_20G;
+		speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_25GB)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_40GB)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_50GB)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS_100GB)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_50G)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_100G)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (link_speed & HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_PAM4_SPEEDS_200G)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	if (bp->link_info->auto_mode ==
 	    HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -980,8 +980,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT |
 				    dev_info->tx_queue_offload_capa;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
@@ -1030,8 +1030,8 @@ static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	 */
 
 	/* VMDq resources */
-	vpool = 64; /* ETH_64_POOLS */
-	vrxq = 128; /* ETH_VMDQ_DCB_NUM_QUEUES */
+	vpool = 64; /* RTE_ETH_64_POOLS */
+	vrxq = 128; /* RTE_ETH_VMDQ_DCB_NUM_QUEUES */
 	for (i = 0; i < 4; vpool >>= 1, i++) {
 		if (max_vnics > vpool) {
 			for (j = 0; j < 5; vrxq >>= 1, j++) {
@@ -1126,18 +1126,18 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
 	    (uint32_t)(eth_dev->data->nb_rx_queues) > bp->max_ring_grps)
 		goto resource_error;
 
-	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) &&
+	if (!(eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) &&
 	    bp->max_vnics < eth_dev->data->nb_rx_queues)
 		goto resource_error;
 
 	bp->rx_cp_nr_rings = bp->rx_nr_rings;
 	bp->tx_cp_nr_rings = bp->tx_nr_rings;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rx_offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rx_offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	eth_dev->data->dev_conf.rxmode.offloads = rx_offloads;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		eth_dev->data->mtu =
 			eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
 			RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE *
@@ -1168,7 +1168,7 @@ void bnxt_print_link_info(struct rte_eth_dev *eth_dev)
 		PMD_DRV_LOG(INFO, "Port %d Link Up - speed %u Mbps - %s\n",
 			eth_dev->data->port_id,
 			(uint32_t)link->link_speed,
-			(link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			("full-duplex") : ("half-duplex\n"));
 	else
 		PMD_DRV_LOG(INFO, "Port %d Link Down\n",
@@ -1184,10 +1184,10 @@ static int bnxt_scattered_rx(struct rte_eth_dev *eth_dev)
 	uint16_t buf_size;
 	int i;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return 1;
 
-	if (eth_dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (eth_dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		return 1;
 
 	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1232,16 +1232,16 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev)
 	 * a limited subset have been enabled.
 	 */
 	if (eth_dev->data->dev_conf.rxmode.offloads &
-		~(DEV_RX_OFFLOAD_VLAN_STRIP |
-		  DEV_RX_OFFLOAD_KEEP_CRC |
-		  DEV_RX_OFFLOAD_JUMBO_FRAME |
-		  DEV_RX_OFFLOAD_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_TCP_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		  DEV_RX_OFFLOAD_RSS_HASH |
-		  DEV_RX_OFFLOAD_VLAN_FILTER))
+		~(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		  RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		  RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		  RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		  RTE_ETH_RX_OFFLOAD_VLAN_FILTER))
 		goto use_scalar_rx;
 
 #if defined(RTE_ARCH_X86) && defined(CC_AVX2_SUPPORT)
@@ -1293,7 +1293,7 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev)
 	 * or tx offloads.
 	 */
 	if (eth_dev->data->scattered_rx ||
-	    (offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) ||
+	    (offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) ||
 	    BNXT_TRUFLOW_EN(bp))
 		goto use_scalar_tx;
 
@@ -1594,10 +1594,10 @@ static int bnxt_dev_start_op(struct rte_eth_dev *eth_dev)
 
 	bnxt_link_update_op(eth_dev, 1);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		vlan_mask |= ETH_VLAN_FILTER_MASK;
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		vlan_mask |= ETH_VLAN_STRIP_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		vlan_mask |= RTE_ETH_VLAN_FILTER_MASK;
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		vlan_mask |= RTE_ETH_VLAN_STRIP_MASK;
 	rc = bnxt_vlan_offload_set_op(eth_dev, vlan_mask);
 	if (rc)
 		goto error;
@@ -1819,8 +1819,8 @@ int bnxt_link_update_op(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		/* Retrieve link info from hardware */
 		rc = bnxt_get_hwrm_link_config(bp, &new);
 		if (rc) {
-			new.link_speed = ETH_LINK_SPEED_100M;
-			new.link_duplex = ETH_LINK_FULL_DUPLEX;
+			new.link_speed = RTE_ETH_LINK_SPEED_100M;
+			new.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR,
 				"Failed to retrieve link rc = 0x%x!\n", rc);
 			goto out;
@@ -2014,7 +2014,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 	if (!vnic->rss_table)
 		return -EINVAL;
 
-	if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return -EINVAL;
 
 	if (reta_size != tbl_size) {
@@ -2027,8 +2027,8 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev,
 	for (i = 0; i < reta_size; i++) {
 		struct bnxt_rx_queue *rxq;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		sft = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		sft = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (!(reta_conf[idx].mask & (1ULL << sft)))
 			continue;
@@ -2081,8 +2081,8 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev,
 	}
 
 	for (idx = 0, i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		sft = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		sft = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (reta_conf[idx].mask & (1ULL << sft)) {
 			uint16_t qid;
@@ -2120,7 +2120,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	 * If RSS enablement were different than dev_configure,
 	 * then return -EINVAL
 	 */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (!rss_conf->rss_hf)
 			PMD_DRV_LOG(ERR, "Hash type NONE\n");
 	} else {
@@ -2138,7 +2138,7 @@ static int bnxt_rss_hash_update_op(struct rte_eth_dev *eth_dev,
 	vnic->hash_type = bnxt_rte_to_hwrm_hash_types(rss_conf->rss_hf);
 	vnic->hash_mode =
 		bnxt_rte_to_hwrm_hash_level(bp, rss_conf->rss_hf,
-					    ETH_RSS_LEVEL(rss_conf->rss_hf));
+					    RTE_ETH_RSS_LEVEL(rss_conf->rss_hf));
 
 	/*
 	 * If hashkey is not specified, use the previously configured
@@ -2183,30 +2183,30 @@ static int bnxt_rss_hash_conf_get_op(struct rte_eth_dev *eth_dev,
 		hash_types = vnic->hash_type;
 		rss_conf->rss_hf = 0;
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_IPV4;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_IPV6;
+			rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 			hash_types &= ~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
 		}
 		if (hash_types & HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6) {
-			rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+			rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 			hash_types &=
 				~HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 		}
@@ -2246,17 +2246,17 @@ static int bnxt_flow_ctrl_get_op(struct rte_eth_dev *dev,
 		fc_conf->autoneg = 1;
 	switch (bp->link_info->pause) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case (HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_TX |
 			HWRM_PORT_PHY_QCFG_OUTPUT_PAUSE_RX):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	}
 	return 0;
@@ -2279,11 +2279,11 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		bp->link_info->auto_pause = 0;
 		bp->link_info->force_pause = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_RX;
@@ -2294,7 +2294,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_RX;
 		}
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX;
@@ -2305,7 +2305,7 @@ static int bnxt_flow_ctrl_set_op(struct rte_eth_dev *dev,
 					HWRM_PORT_PHY_CFG_INPUT_FORCE_PAUSE_TX;
 		}
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		if (fc_conf->autoneg) {
 			bp->link_info->auto_pause =
 					HWRM_PORT_PHY_CFG_INPUT_AUTO_PAUSE_TX |
@@ -2336,7 +2336,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2351,7 +2351,7 @@ bnxt_udp_tunnel_port_add_op(struct rte_eth_dev *eth_dev,
 			HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN;
 		bp->vxlan_port_cnt++;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "Tunnel Port %d already programmed\n",
 				udp_tunnel->udp_port);
@@ -2389,7 +2389,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 		return rc;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (!bp->vxlan_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2406,7 +2406,7 @@ bnxt_udp_tunnel_port_del_op(struct rte_eth_dev *eth_dev,
 			HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN;
 		port = bp->vxlan_fw_dst_port_id;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (!bp->geneve_port_cnt) {
 			PMD_DRV_LOG(ERR, "No Tunnel port configured yet\n");
 			return -EINVAL;
@@ -2584,7 +2584,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 	int rc;
 
 	vnic = BNXT_GET_DEFAULT_VNIC(bp);
-	if (!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		/* Remove any VLAN filters programmed */
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
@@ -2604,7 +2604,7 @@ bnxt_config_vlan_hw_filter(struct bnxt *bp, uint64_t rx_offloads)
 		bnxt_add_vlan_filter(bp, 0);
 	}
 	PMD_DRV_LOG(DEBUG, "VLAN Filtering: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER));
 
 	return 0;
 }
@@ -2617,7 +2617,7 @@ static int bnxt_free_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 
 	/* Destroy vnic filters and vnic */
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		for (i = 0; i < RTE_ETHER_MAX_VLAN_ID; i++)
 			bnxt_del_vlan_filter(bp, i);
 	}
@@ -2656,7 +2656,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	if (bp->eth_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_VLAN_FILTER) {
+	    RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		rc = bnxt_add_vlan_filter(bp, 0);
 		if (rc)
 			return rc;
@@ -2674,7 +2674,7 @@ bnxt_config_vlan_hw_stripping(struct bnxt *bp, uint64_t rx_offloads)
 		return rc;
 
 	PMD_DRV_LOG(DEBUG, "VLAN Strip Offload: %d\n",
-		    !!(rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP));
+		    !!(rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP));
 
 	return rc;
 }
@@ -2694,22 +2694,22 @@ bnxt_vlan_offload_set_op(struct rte_eth_dev *dev, int mask)
 	if (!dev->data->dev_started)
 		return 0;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering */
 		rc = bnxt_config_vlan_hw_filter(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
 		rc = bnxt_config_vlan_hw_stripping(bp, rx_offloads);
 		if (rc)
 			return rc;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			PMD_DRV_LOG(DEBUG, "Extend VLAN supported\n");
 		else
 			PMD_DRV_LOG(INFO, "Extend VLAN unsupported\n");
@@ -2724,10 +2724,10 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 {
 	struct bnxt *bp = dev->data->dev_private;
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
-	if (vlan_type != ETH_VLAN_TYPE_INNER &&
-	    vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	    vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -2739,7 +2739,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		return -EINVAL;
 	}
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		switch (tpid) {
 		case RTE_ETHER_TYPE_QINQ:
 			bp->outer_tpid_bd =
@@ -2767,7 +2767,7 @@ bnxt_vlan_tpid_set_op(struct rte_eth_dev *dev, enum rte_vlan_type vlan_type,
 		}
 		bp->outer_tpid_bd |= tpid;
 		PMD_DRV_LOG(INFO, "outer_tpid_bd = %x\n", bp->outer_tpid_bd);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer vlan in QinQ\n");
 		return -EINVAL;
@@ -2807,7 +2807,7 @@ bnxt_set_default_mac_addr_op(struct rte_eth_dev *dev,
 	bnxt_del_dflt_mac_filter(bp, vnic);
 
 	memcpy(bp->mac_addr, addr, RTE_ETHER_ADDR_LEN);
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		/* This filter will allow only untagged packets */
 		rc = bnxt_add_vlan_filter(bp, 0);
 	} else {
@@ -3029,10 +3029,10 @@ int bnxt_mtu_set_op(struct rte_eth_dev *eth_dev, uint16_t new_mtu)
 	if (new_mtu > RTE_ETHER_MTU) {
 		bp->flags |= BNXT_FLAG_JUMBO;
 		bp->eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	} else {
 		bp->eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		bp->flags &= ~BNXT_FLAG_JUMBO;
 	}
 
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 59489b591a6f..98e1107f629c 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -974,7 +974,7 @@ static int bnxt_vnic_prep(struct bnxt *bp, struct bnxt_vnic_info *vnic,
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		vnic->vlan_strip = true;
 	else
 		vnic->vlan_strip = false;
@@ -1157,7 +1157,7 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 
 		rxq = bp->rx_queues[act_q->index];
 
-		if (!(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS) && rxq &&
+		if (!(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) && rxq &&
 		    vnic->fw_vnic_id != INVALID_HW_RING_ID)
 			goto use_vnic;
 
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index f29d57423585..0d9dda0c362c 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -628,7 +628,7 @@ int bnxt_hwrm_set_l2_filter(struct bnxt *bp,
 	uint16_t j = dst_id - 1;
 
 	//TODO: Is there a better way to add VLANs to each VNIC in case of VMDQ
-	if ((dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG) &&
+	if ((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) &&
 	    conf->pool_map[j].pools & (1UL << j)) {
 		PMD_DRV_LOG(DEBUG,
 			"Add vlan %u to vmdq pool %u\n",
@@ -2955,12 +2955,12 @@ static uint16_t bnxt_parse_eth_link_duplex(uint32_t conf_link_speed)
 {
 	uint8_t hw_link_duplex = HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
-	if ((conf_link_speed & ETH_LINK_SPEED_FIXED) == ETH_LINK_SPEED_AUTONEG)
+	if ((conf_link_speed & RTE_ETH_LINK_SPEED_FIXED) == RTE_ETH_LINK_SPEED_AUTONEG)
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH;
 
 	switch (conf_link_speed) {
-	case ETH_LINK_SPEED_10M_HD:
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		return HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF;
 	}
@@ -2977,51 +2977,51 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 {
 	uint16_t eth_link_speed = 0;
 
-	if (conf_link_speed == ETH_LINK_SPEED_AUTONEG)
-		return ETH_LINK_SPEED_AUTONEG;
+	if (conf_link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
+		return RTE_ETH_LINK_SPEED_AUTONEG;
 
-	switch (conf_link_speed & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_100M:
-	case ETH_LINK_SPEED_100M_HD:
+	switch (conf_link_speed & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		/* FALLTHROUGH */
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_100MB;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_1GB;
 		break;
-	case ETH_LINK_SPEED_2_5G:
+	case RTE_ETH_LINK_SPEED_2_5G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_2_5GB;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_10GB;
 		break;
-	case ETH_LINK_SPEED_20G:
+	case RTE_ETH_LINK_SPEED_20G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_20GB;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_25GB;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_40GB;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_50GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_50GB;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		eth_link_speed = pam4_link ?
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_100GB :
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEED_100GB;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		eth_link_speed =
 			HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 		break;
@@ -3034,11 +3034,11 @@ static uint16_t bnxt_parse_eth_link_speed(uint32_t conf_link_speed,
 	return eth_link_speed;
 }
 
-#define BNXT_SUPPORTED_SPEEDS (ETH_LINK_SPEED_100M | ETH_LINK_SPEED_100M_HD | \
-		ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G | \
-		ETH_LINK_SPEED_10G | ETH_LINK_SPEED_20G | ETH_LINK_SPEED_25G | \
-		ETH_LINK_SPEED_40G | ETH_LINK_SPEED_50G | \
-		ETH_LINK_SPEED_100G | ETH_LINK_SPEED_200G)
+#define BNXT_SUPPORTED_SPEEDS (RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_100M_HD | \
+		RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G | \
+		RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_20G | RTE_ETH_LINK_SPEED_25G | \
+		RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_50G | \
+		RTE_ETH_LINK_SPEED_100G | RTE_ETH_LINK_SPEED_200G)
 
 static int bnxt_validate_link_speed(struct bnxt *bp)
 {
@@ -3047,13 +3047,13 @@ static int bnxt_validate_link_speed(struct bnxt *bp)
 	uint32_t link_speed_capa;
 	uint32_t one_speed;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG)
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG)
 		return 0;
 
 	link_speed_capa = bnxt_get_speed_capabilities(bp);
 
-	if (link_speed & ETH_LINK_SPEED_FIXED) {
-		one_speed = link_speed & ~ETH_LINK_SPEED_FIXED;
+	if (link_speed & RTE_ETH_LINK_SPEED_FIXED) {
+		one_speed = link_speed & ~RTE_ETH_LINK_SPEED_FIXED;
 
 		if (one_speed & (one_speed - 1)) {
 			PMD_DRV_LOG(ERR,
@@ -3083,71 +3083,71 @@ bnxt_parse_eth_link_speed_mask(struct bnxt *bp, uint32_t link_speed)
 {
 	uint16_t ret = 0;
 
-	if (link_speed == ETH_LINK_SPEED_AUTONEG) {
+	if (link_speed == RTE_ETH_LINK_SPEED_AUTONEG) {
 		if (bp->link_info->support_speeds)
 			return bp->link_info->support_speeds;
 		link_speed = BNXT_SUPPORTED_SPEEDS;
 	}
 
-	if (link_speed & ETH_LINK_SPEED_100M)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_100M_HD)
+	if (link_speed & RTE_ETH_LINK_SPEED_100M_HD)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100MB;
-	if (link_speed & ETH_LINK_SPEED_1G)
+	if (link_speed & RTE_ETH_LINK_SPEED_1G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_1GB;
-	if (link_speed & ETH_LINK_SPEED_2_5G)
+	if (link_speed & RTE_ETH_LINK_SPEED_2_5G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_2_5GB;
-	if (link_speed & ETH_LINK_SPEED_10G)
+	if (link_speed & RTE_ETH_LINK_SPEED_10G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_10GB;
-	if (link_speed & ETH_LINK_SPEED_20G)
+	if (link_speed & RTE_ETH_LINK_SPEED_20G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_20GB;
-	if (link_speed & ETH_LINK_SPEED_25G)
+	if (link_speed & RTE_ETH_LINK_SPEED_25G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_25GB;
-	if (link_speed & ETH_LINK_SPEED_40G)
+	if (link_speed & RTE_ETH_LINK_SPEED_40G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_40GB;
-	if (link_speed & ETH_LINK_SPEED_50G)
+	if (link_speed & RTE_ETH_LINK_SPEED_50G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_50GB;
-	if (link_speed & ETH_LINK_SPEED_100G)
+	if (link_speed & RTE_ETH_LINK_SPEED_100G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEED_MASK_100GB;
-	if (link_speed & ETH_LINK_SPEED_200G)
+	if (link_speed & RTE_ETH_LINK_SPEED_200G)
 		ret |= HWRM_PORT_PHY_CFG_INPUT_FORCE_PAM4_LINK_SPEED_200GB;
 	return ret;
 }
 
 static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 {
-	uint32_t eth_link_speed = ETH_SPEED_NUM_NONE;
+	uint32_t eth_link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	switch (hw_link_speed) {
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100MB:
-		eth_link_speed = ETH_SPEED_NUM_100M;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_1GB:
-		eth_link_speed = ETH_SPEED_NUM_1G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2_5GB:
-		eth_link_speed = ETH_SPEED_NUM_2_5G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10GB:
-		eth_link_speed = ETH_SPEED_NUM_10G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_20GB:
-		eth_link_speed = ETH_SPEED_NUM_20G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_25GB:
-		eth_link_speed = ETH_SPEED_NUM_25G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_40GB:
-		eth_link_speed = ETH_SPEED_NUM_40G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_50GB:
-		eth_link_speed = ETH_SPEED_NUM_50G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB:
-		eth_link_speed = ETH_SPEED_NUM_100G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB:
-		eth_link_speed = ETH_SPEED_NUM_200G;
+		eth_link_speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	case HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_2GB:
 	default:
@@ -3160,16 +3160,16 @@ static uint32_t bnxt_parse_hw_link_speed(uint16_t hw_link_speed)
 
 static uint16_t bnxt_parse_hw_link_duplex(uint16_t hw_link_duplex)
 {
-	uint16_t eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+	uint16_t eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (hw_link_duplex) {
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_BOTH:
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_FULL:
 		/* FALLTHROUGH */
-		eth_link_duplex = ETH_LINK_FULL_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case HWRM_PORT_PHY_CFG_INPUT_AUTO_DUPLEX_HALF:
-		eth_link_duplex = ETH_LINK_HALF_DUPLEX;
+		eth_link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "HWRM link duplex %d not defined\n",
@@ -3198,12 +3198,12 @@ int bnxt_get_hwrm_link_config(struct bnxt *bp, struct rte_eth_link *link)
 		link->link_speed =
 			bnxt_parse_hw_link_speed(link_info->link_speed);
 	else
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 	link->link_duplex = bnxt_parse_hw_link_duplex(link_info->duplex);
 	link->link_status = link_info->link_up;
 	link->link_autoneg = link_info->auto_mode ==
 		HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_MODE_NONE ?
-		ETH_LINK_FIXED : ETH_LINK_AUTONEG;
+		RTE_ETH_LINK_FIXED : RTE_ETH_LINK_AUTONEG;
 exit:
 	return rc;
 }
@@ -3229,7 +3229,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up)
 
 	autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds);
 	if (BNXT_CHIP_P5(bp) &&
-	    dev_conf->link_speeds == ETH_LINK_SPEED_40G) {
+	    dev_conf->link_speeds == RTE_ETH_LINK_SPEED_40G) {
 		/* 40G is not supported as part of media auto detect.
 		 * The speed should be forced and autoneg disabled
 		 * to configure 40G speed.
@@ -3320,7 +3320,7 @@ int bnxt_hwrm_func_qcfg(struct bnxt *bp, uint16_t *mtu)
 
 	HWRM_CHECK_RESULT();
 
-	bp->vlan = rte_le_to_cpu_16(resp->vlan) & ETH_VLAN_ID_MAX;
+	bp->vlan = rte_le_to_cpu_16(resp->vlan) & RTE_ETH_VLAN_ID_MAX;
 
 	svif_info = rte_le_to_cpu_16(resp->svif_info);
 	if (svif_info & HWRM_FUNC_QCFG_OUTPUT_SVIF_INFO_SVIF_VALID)
diff --git a/drivers/net/bnxt/bnxt_reps.c b/drivers/net/bnxt/bnxt_reps.c
index bdbad53b7d7f..a9f5e13476b0 100644
--- a/drivers/net/bnxt/bnxt_reps.c
+++ b/drivers/net/bnxt/bnxt_reps.c
@@ -536,7 +536,7 @@ int bnxt_rep_dev_info_get_op(struct rte_eth_dev *eth_dev,
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (parent_bp->flags & BNXT_FLAG_PTP_SUPPORTED)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev_info->tx_offload_capa = BNXT_DEV_TX_OFFLOAD_SUPPORT;
 	dev_info->flow_type_rss_offloads = BNXT_ETH_RSS_SUPPORT;
 
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 957b175f1b89..632a611bf612 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -185,7 +185,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 	int tpa_info_start = ag_bitmap_start + ag_bitmap_len;
 	int tpa_info_len = 0;
 
-	if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (rx_ring_info && (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		int tpa_max = BNXT_TPA_MAX_AGGS(bp);
 
 		tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
@@ -278,7 +278,7 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 				    ag_bitmap_start, ag_bitmap_len);
 
 		/* TPA info */
-		if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 			rx_ring_info->tpa_info =
 				((struct bnxt_tpa_info *)((char *)mz->addr +
 							  tpa_info_start));
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index bbcb3b06e7df..0ac3a2b3b7d3 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -41,13 +41,13 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 	bp->nr_vnics = 0;
 
 	/* Multi-queue mode */
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB_RSS) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 		/* VMDq ONLY, VMDq+RSS, VMDq+DCB, VMDq+DCB+RSS */
 
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_RSS:
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* FALLTHROUGH */
 			/* ETH_8/64_POOLs */
 			pools = conf->nb_queue_pools;
@@ -55,14 +55,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 			max_pools = RTE_MIN(bp->max_vnics,
 					    RTE_MIN(bp->max_l2_ctx,
 					    RTE_MIN(bp->max_rsscos_ctx,
-						    ETH_64_POOLS)));
+						    RTE_ETH_64_POOLS)));
 			PMD_DRV_LOG(DEBUG,
 				    "pools = %u max_pools = %u\n",
 				    pools, max_pools);
 			if (pools > max_pools)
 				pools = max_pools;
 			break;
-		case ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_RSS:
 			pools = bp->rx_cosq_cnt ? bp->rx_cosq_cnt : 1;
 			break;
 		default:
@@ -100,7 +100,7 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 				    ring_idx, rxq, i, vnic);
 		}
 		if (i == 0) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB) {
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB) {
 				bp->eth_dev->data->promiscuous = 1;
 				vnic->flags |= BNXT_VNIC_INFO_PROMISC;
 			}
@@ -110,8 +110,8 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 		vnic->end_grp_id = end_grp_id;
 
 		if (i) {
-			if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_DCB ||
-			    !(dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS))
+			if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_DCB ||
+			    !(dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS))
 				vnic->rss_dflt_cr = true;
 			goto skip_filter_allocation;
 		}
@@ -136,14 +136,14 @@ int bnxt_mq_rx_configure(struct bnxt *bp)
 
 	bp->rx_num_qs_per_vnic = nb_q_per_grp;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		struct rte_eth_rss_conf *rss = &dev_conf->rx_adv_conf.rss_conf;
 
 		if (bp->flags & BNXT_FLAG_UPDATE_HASH)
 			bp->flags &= ~BNXT_FLAG_UPDATE_HASH;
 
 		for (i = 0; i < bp->nr_vnics; i++) {
-			uint32_t lvl = ETH_RSS_LEVEL(rss->rss_hf);
+			uint32_t lvl = RTE_ETH_RSS_LEVEL(rss->rss_hf);
 
 			vnic = &bp->vnic_info[i];
 			vnic->hash_type =
@@ -338,7 +338,7 @@ int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
 	PMD_DRV_LOG(DEBUG, "RX Buf size is %d\n", rxq->rx_buf_size);
 	rxq->queue_id = queue_idx;
 	rxq->port_id = eth_dev->data->port_id;
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -454,7 +454,7 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	}
 	PMD_DRV_LOG(INFO, "Rx queue started %d\n", rx_queue_id);
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		vnic = rxq->vnic;
 
 		if (BNXT_HAS_RING_GRPS(bp)) {
@@ -525,7 +525,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	rxq->rx_started = false;
 	PMD_DRV_LOG(DEBUG, "Rx queue stopped\n");
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (BNXT_HAS_RING_GRPS(bp))
 			vnic->fw_grp_ids[rx_queue_id] = INVALID_HW_RING_ID;
 
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 73fbdd17d126..0909bab89b76 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -566,8 +566,8 @@ bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 	dev_conf = &rxq->bp->eth_dev->data->dev_conf;
 	offloads = dev_conf->rxmode.offloads;
 
-	outer_cksum_enabled = !!(offloads & (DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_OUTER_UDP_CKSUM));
+	outer_cksum_enabled = !!(offloads & (RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM));
 
 	/* Initialize ol_flags table. */
 	pt = rxr->ol_flags_table;
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
index d08854ff61e2..e4905b4fd169 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_avx2.c
@@ -416,7 +416,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_common.h b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
index 9b9489a695a2..0627fd212d0a 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_common.h
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_common.h
@@ -96,7 +96,7 @@ bnxt_rxq_rearm(struct bnxt_rx_queue *rxq, struct bnxt_rx_ring_info *rxr)
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static inline void
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
index 13211060cf0e..f15e2d3b4ed4 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_neon.c
@@ -352,7 +352,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index 6e563053260a..ffd560166cac 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -333,7 +333,7 @@ bnxt_handle_tx_cp_vec(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_vec_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp_vec(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 47824334ae3e..401dd83f4e7d 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -350,7 +350,7 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
 }
 
 /*
- * Transmit completion function for use when DEV_TX_OFFLOAD_MBUF_FAST_FREE
+ * Transmit completion function for use when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
  * is enabled.
  */
 static void bnxt_tx_cmp_fast(struct bnxt_tx_queue *txq, int nr_pkts)
@@ -476,7 +476,7 @@ static int bnxt_handle_tx_cp(struct bnxt_tx_queue *txq)
 	} while (nb_tx_pkts < ring_mask);
 
 	if (nb_tx_pkts) {
-		if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			bnxt_tx_cmp_fast(txq, nb_tx_pkts);
 		else
 			bnxt_tx_cmp(txq, nb_tx_pkts);
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 26253a7e17f2..c63cf4b943fa 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -239,17 +239,17 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 {
 	uint16_t hwrm_type = 0;
 
-	if (rte_type & ETH_RSS_IPV4)
+	if (rte_type & RTE_ETH_RSS_IPV4)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV4;
-	if (rte_type & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV4;
-	if (rte_type & ETH_RSS_IPV6)
+	if (rte_type & RTE_ETH_RSS_IPV6)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_TCP_IPV6;
-	if (rte_type & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rte_type & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		hwrm_type |= HWRM_VNIC_RSS_CFG_INPUT_HASH_TYPE_UDP_IPV6;
 
 	return hwrm_type;
@@ -258,11 +258,11 @@ uint16_t bnxt_rte_to_hwrm_hash_types(uint64_t rte_type)
 int bnxt_rte_to_hwrm_hash_level(struct bnxt *bp, uint64_t hash_f, uint32_t lvl)
 {
 	uint32_t mode = HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_DEFAULT;
-	bool l3 = (hash_f & (ETH_RSS_IPV4 | ETH_RSS_IPV6));
-	bool l4 = (hash_f & (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP));
+	bool l3 = (hash_f & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6));
+	bool l4 = (hash_f & (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP));
 	bool l3_only = l3 && !l4;
 	bool l3_and_l4 = l3 && l4;
 
@@ -307,16 +307,16 @@ uint64_t bnxt_hwrm_to_rte_rss_level(struct bnxt *bp, uint32_t mode)
 	 * return default hash mode.
 	 */
 	if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_OUTER_RSS))
-		return ETH_RSS_LEVEL_PMD_DEFAULT;
+		return RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_2 ||
 	    mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_OUTERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_OUTERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_OUTERMOST;
 	else if (mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_2 ||
 		 mode == HWRM_VNIC_RSS_CFG_INPUT_HASH_MODE_FLAGS_INNERMOST_4)
-		rss_level |= ETH_RSS_LEVEL_INNERMOST;
+		rss_level |= RTE_ETH_RSS_LEVEL_INNERMOST;
 	else
-		rss_level |= ETH_RSS_LEVEL_PMD_DEFAULT;
+		rss_level |= RTE_ETH_RSS_LEVEL_PMD_DEFAULT;
 
 	return rss_level;
 }
diff --git a/drivers/net/bnxt/rte_pmd_bnxt.c b/drivers/net/bnxt/rte_pmd_bnxt.c
index f71543810970..77ecbef04c3d 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt.c
+++ b/drivers/net/bnxt/rte_pmd_bnxt.c
@@ -421,18 +421,18 @@ int rte_pmd_bnxt_set_vf_rxmode(uint16_t port, uint16_t vf,
 	if (vf >= bp->pdev->max_vfs)
 		return -EINVAL;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG) {
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG) {
 		PMD_DRV_LOG(ERR, "Currently cannot toggle this setting\n");
 		return -ENOTSUP;
 	}
 
 	/* Is this really the correct mapping?  VFd seems to think it is. */
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		flag |= BNXT_VNIC_INFO_PROMISC;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		flag |= BNXT_VNIC_INFO_BCAST;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		flag |= BNXT_VNIC_INFO_ALLMULTI | BNXT_VNIC_INFO_MCAST;
 
 	if (on)
diff --git a/drivers/net/bonding/eth_bond_private.h b/drivers/net/bonding/eth_bond_private.h
index fc179a2732ac..8b104b639184 100644
--- a/drivers/net/bonding/eth_bond_private.h
+++ b/drivers/net/bonding/eth_bond_private.h
@@ -167,8 +167,8 @@ struct bond_dev_private {
 	struct rte_eth_desc_lim tx_desc_lim;	/**< Tx descriptor limits */
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_512 /
-			RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_512 /
+			RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[52];				/**< 52-byte hash key buffer. */
 	uint8_t rss_key_len;				/**< hash key length in bytes. */
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 128754f4595a..20adfcf0ea9c 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -770,25 +770,25 @@ link_speed_key(uint16_t speed) {
 	uint16_t key_speed;
 
 	switch (speed) {
-	case ETH_SPEED_NUM_NONE:
+	case RTE_ETH_SPEED_NUM_NONE:
 		key_speed = 0x00;
 		break;
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		key_speed = BOND_LINK_SPEED_KEY_10M;
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		key_speed = BOND_LINK_SPEED_KEY_100M;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		key_speed = BOND_LINK_SPEED_KEY_1000M;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		key_speed = BOND_LINK_SPEED_KEY_10G;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		key_speed = BOND_LINK_SPEED_KEY_20G;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		key_speed = BOND_LINK_SPEED_KEY_40G;
 		break;
 	default:
@@ -866,7 +866,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
 
 		if (ret >= 0 && link_info.link_status != 0) {
 			key = link_speed_key(link_info.link_speed) << 1;
-			if (link_info.link_duplex == ETH_LINK_FULL_DUPLEX)
+			if (link_info.link_duplex == RTE_ETH_LINK_FULL_DUPLEX)
 				key |= BOND_LINK_FULL_DUPLEX_KEY;
 		} else {
 			key = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index eb8d15d16034..a6fe0304c648 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -204,7 +204,7 @@ slave_vlan_filter_set(uint16_t bonded_port_id, uint16_t slave_port_id)
 
 	bonded_eth_dev = &rte_eth_devices[bonded_port_id];
 	if ((bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER) == 0)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER) == 0)
 		return 0;
 
 	internals = bonded_eth_dev->data->dev_private;
@@ -586,7 +586,7 @@ __eth_bond_slave_add_lock_free(uint16_t bonded_port_id, uint16_t slave_port_id)
 			return -1;
 		}
 
-		 if (link_props.link_status == ETH_LINK_UP) {
+		if (link_props.link_status == RTE_ETH_LINK_UP) {
 			if (internals->active_slave_count == 0 &&
 			    !internals->user_defined_primary_port)
 				bond_ethdev_primary_set(internals,
@@ -721,7 +721,7 @@ __eth_bond_slave_remove_lock_free(uint16_t bonded_port_id,
 		internals->tx_offload_capa = 0;
 		internals->rx_queue_offload_capa = 0;
 		internals->tx_queue_offload_capa = 0;
-		internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+		internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 		internals->reta_size = 0;
 		internals->candidate_max_rx_pktlen = 0;
 		internals->max_rx_pktlen = 0;
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index a6755661c49c..a2903366a3f6 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -1373,8 +1373,8 @@ link_properties_set(struct rte_eth_dev *ethdev, struct rte_eth_link *slave_link)
 		 * In any other mode the link properties are set to default
 		 * values of AUTONEG/DUPLEX
 		 */
-		ethdev->data->dev_link.link_autoneg = ETH_LINK_AUTONEG;
-		ethdev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		ethdev->data->dev_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
+		ethdev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	}
 }
 
@@ -1704,7 +1704,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 		slave_eth_dev->data->dev_conf.intr_conf.lsc = 1;
 
 	/* If RSS is enabled for bonding, try to enable it for slaves  */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (internals->rss_key_len != 0) {
 			slave_eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len =
 					internals->rss_key_len;
@@ -1721,23 +1721,23 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		slave_eth_dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_VLAN_FILTER;
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	else
 		slave_eth_dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_VLAN_FILTER;
+				~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	slave_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len =
 			bonded_eth_dev->data->dev_conf.rxmode.max_rx_pkt_len;
 
 	if (bonded_eth_dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_JUMBO_FRAME)
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		slave_eth_dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_JUMBO_FRAME;
+				RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		slave_eth_dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+				~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	nb_rx_queues = bonded_eth_dev->data->nb_rx_queues;
 	nb_tx_queues = bonded_eth_dev->data->nb_tx_queues;
@@ -1838,7 +1838,7 @@ slave_configure(struct rte_eth_dev *bonded_eth_dev,
 	}
 
 	/* If RSS is enabled for bonding, synchronize RETA */
-	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (bonded_eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		int i;
 		struct bond_dev_private *internals;
 
@@ -1961,7 +1961,7 @@ bond_ethdev_start(struct rte_eth_dev *eth_dev)
 		return -1;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 1;
 
 	internals = eth_dev->data->dev_private;
@@ -2101,7 +2101,7 @@ bond_ethdev_stop(struct rte_eth_dev *eth_dev)
 			tlb_last_obytets[internals->active_slaves[i]] = 0;
 	}
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	eth_dev->data->dev_started = 0;
 
 	internals->link_status_polling_enabled = 0;
@@ -2423,15 +2423,15 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 
 	bond_ctx = ethdev->data->dev_private;
 
-	ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+	ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	if (ethdev->data->dev_started == 0 ||
 			bond_ctx->active_slave_count == 0) {
-		ethdev->data->dev_link.link_status = ETH_LINK_DOWN;
+		ethdev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 
-	ethdev->data->dev_link.link_status = ETH_LINK_UP;
+	ethdev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	if (wait_to_complete)
 		link_update = rte_eth_link_get;
@@ -2456,7 +2456,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 					  &slave_link);
 			if (ret < 0) {
 				ethdev->data->dev_link.link_speed =
-					ETH_SPEED_NUM_NONE;
+					RTE_ETH_SPEED_NUM_NONE;
 				RTE_BOND_LOG(ERR,
 					"Slave (port %u) link get failed: %s",
 					bond_ctx->active_slaves[idx],
@@ -2498,7 +2498,7 @@ bond_ethdev_link_update(struct rte_eth_dev *ethdev, int wait_to_complete)
 		 * In theses mode the maximum theoretical link speed is the sum
 		 * of all the slaves
 		 */
-		ethdev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		ethdev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		one_link_update_succeeded = false;
 
 		for (idx = 0; idx < bond_ctx->active_slave_count; idx++) {
@@ -2872,7 +2872,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 			goto link_update;
 
 		/* check link state properties if bonded link is up*/
-		if (bonded_eth_dev->data->dev_link.link_status == ETH_LINK_UP) {
+		if (bonded_eth_dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 			if (link_properties_valid(bonded_eth_dev, &link) != 0)
 				RTE_BOND_LOG(ERR, "Invalid link properties "
 					     "for slave %d in bonding mode %d",
@@ -2888,7 +2888,7 @@ bond_ethdev_lsc_event_callback(uint16_t port_id, enum rte_eth_event_type type,
 		if (internals->active_slave_count < 1) {
 			/* If first active slave, then change link status */
 			bonded_eth_dev->data->dev_link.link_status =
-								ETH_LINK_UP;
+								RTE_ETH_LINK_UP;
 			internals->current_primary_port = port_id;
 			lsc_flag = 1;
 
@@ -2980,12 +2980,12 @@ bond_ethdev_rss_reta_update(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	 /* Copy RETA table */
-	reta_count = (reta_size + RTE_RETA_GROUP_SIZE - 1) /
-			RTE_RETA_GROUP_SIZE;
+	reta_count = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) /
+			RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < reta_count; i++) {
 		internals->reta_conf[i].mask = reta_conf[i].mask;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				internals->reta_conf[i].reta[j] = reta_conf[i].reta[j];
 	}
@@ -3018,8 +3018,8 @@ bond_ethdev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	 /* Copy RETA table */
-	for (i = 0; i < reta_size / RTE_RETA_GROUP_SIZE; i++)
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = internals->reta_conf[i].reta[j];
 
@@ -3279,7 +3279,7 @@ bond_alloc(struct rte_vdev_device *dev, uint8_t mode)
 	internals->max_rx_pktlen = 0;
 
 	/* Initially allow to choose any offload type */
-	internals->flow_type_rss_offloads = ETH_RSS_PROTO_MASK;
+	internals->flow_type_rss_offloads = RTE_ETH_RSS_PROTO_MASK;
 
 	memset(&internals->default_rxconf, 0,
 	       sizeof(internals->default_rxconf));
@@ -3508,7 +3508,7 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 	 * set key to the the value specified in port RSS configuration.
 	 * Fall back to default RSS key if the key is not specified
 	 */
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS) {
 		if (dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key != NULL) {
 			internals->rss_key_len =
 				dev->data->dev_conf.rx_adv_conf.rss_conf.rss_key_len;
@@ -3523,9 +3523,9 @@ bond_ethdev_configure(struct rte_eth_dev *dev)
 
 		for (i = 0; i < RTE_DIM(internals->reta_conf); i++) {
 			internals->reta_conf[i].mask = ~0LL;
-			for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+			for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 				internals->reta_conf[i].reta[j] =
-						(i * RTE_RETA_GROUP_SIZE + j) %
+						(i * RTE_ETH_RETA_GROUP_SIZE + j) %
 						dev->data->nb_rx_queues;
 		}
 	}
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 7caec6cf14c8..9a09748673b2 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -15,22 +15,22 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
@@ -69,36 +69,36 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
 	return flags;
diff --git a/drivers/net/cnxk/cn10k_rx.c b/drivers/net/cnxk/cn10k_rx.c
index 69e767ac3dd6..e3b1bd8ad225 100644
--- a/drivers/net/cnxk/cn10k_rx.c
+++ b/drivers/net/cnxk/cn10k_rx.c
@@ -76,12 +76,12 @@ cn10k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
diff --git a/drivers/net/cnxk/cn10k_tx.c b/drivers/net/cnxk/cn10k_tx.c
index 0e1276c60ba2..f63b8fabefd4 100644
--- a/drivers/net/cnxk/cn10k_tx.c
+++ b/drivers/net/cnxk/cn10k_tx.c
@@ -77,11 +77,11 @@ cn10k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index 115e678916bb..9ff2d3dc114a 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -15,22 +15,22 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-	    (dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+	    (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_IPV4_CKSUM | DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	    (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
 	if (!dev->ptype_disable)
@@ -69,36 +69,36 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM || conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
 	return flags;
@@ -277,9 +277,9 @@ cn9k_nix_configure(struct rte_eth_dev *eth_dev)
 
 	/* Platform specific checks */
 	if ((roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	     (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	     (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		plt_err("Outer IP and SCTP checksum unsupported");
 		return -EINVAL;
 	}
@@ -530,17 +530,17 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
 	 * TSO not supported for earlier chip revisions
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0())
-		dev->tx_offload_capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-					  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-					  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		dev->tx_offload_capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+					  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 
 	/* 50G and 100G to be supported for board version C0
 	 * and above of CN9K.
 	 */
 	if (roc_model_is_cn96_a0() || roc_model_is_cn95_a0()) {
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_50G;
-		dev->speed_capa &= ~(uint64_t)ETH_LINK_SPEED_100G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_50G;
+		dev->speed_capa &= ~(uint64_t)RTE_ETH_LINK_SPEED_100G;
 	}
 
 	dev->hwcap = 0;
diff --git a/drivers/net/cnxk/cn9k_rx.c b/drivers/net/cnxk/cn9k_rx.c
index 7d9f1bd61f79..08ee28658bce 100644
--- a/drivers/net/cnxk/cn9k_rx.c
+++ b/drivers/net/cnxk/cn9k_rx.c
@@ -76,12 +76,12 @@ cn9k_eth_set_rx_function(struct rte_eth_dev *eth_dev)
 			nix_eth_rx_burst_mseg[0][0][0][0][0][0];
 
 	if (dev->scalar_ena) {
-		if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 			return pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 		return pick_rx_func(eth_dev, nix_eth_rx_burst);
 	}
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		return pick_rx_func(eth_dev, nix_eth_rx_vec_burst_mseg);
 	return pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 }
diff --git a/drivers/net/cnxk/cn9k_tx.c b/drivers/net/cnxk/cn9k_tx.c
index 763f9a14fd79..f35ae8e70438 100644
--- a/drivers/net/cnxk/cn9k_tx.c
+++ b/drivers/net/cnxk/cn9k_tx.c
@@ -76,11 +76,11 @@ cn9k_eth_set_tx_function(struct rte_eth_dev *eth_dev)
 
 	if (dev->scalar_ena) {
 		pick_tx_func(eth_dev, nix_eth_tx_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 	} else {
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-		if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+		if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 			pick_tx_func(eth_dev, nix_eth_tx_vec_burst_mseg);
 	}
 
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 0e3652ed5109..f6b75645bb69 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -10,7 +10,7 @@ nix_get_rx_offload_capa(struct cnxk_eth_dev *dev)
 
 	if (roc_nix_is_vf_or_sdp(&dev->nix) ||
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -28,11 +28,11 @@ nix_get_speed_capa(struct cnxk_eth_dev *dev)
 	uint32_t speed_capa;
 
 	/* Auto negotiation disabled */
-	speed_capa = ETH_LINK_SPEED_FIXED;
+	speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!roc_nix_is_vf_or_sdp(&dev->nix) && !roc_nix_is_lbk(&dev->nix)) {
-		speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			      ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			      ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			      RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			      RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return speed_capa;
@@ -54,8 +54,8 @@ nix_enable_mseg_on_jumbo(struct cnxk_eth_rxq_sp *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 }
 
@@ -90,7 +90,7 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	struct rte_eth_fc_conf fc_conf = {0};
 	int rc;
 
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = cnxk_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -98,10 +98,10 @@ nix_init_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
@@ -122,11 +122,11 @@ nix_update_flow_ctrl_config(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (roc_model_is_cn96_ax() &&
 	    dev->npc.switch_header_type != ROC_PRIV_FLAGS_HIGIG &&
-	    (fc_cfg.mode == RTE_FC_FULL || fc_cfg.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_cfg.mode == RTE_ETH_FC_FULL || fc_cfg.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_cfg.mode =
-				(fc_cfg.mode == RTE_FC_FULL ||
-				fc_cfg.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_cfg.mode == RTE_ETH_FC_FULL ||
+				fc_cfg.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return cnxk_nix_flow_ctrl_set(eth_dev, &fc_cfg);
@@ -169,7 +169,7 @@ nix_sq_max_sqe_sz(struct cnxk_eth_dev *dev)
 	 * Maximum three segments can be supported with W8, Choose
 	 * NIX_MAXSQESZ_W16 for multi segment offload.
 	 */
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -361,7 +361,7 @@ cnxk_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t qid,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en) {
 		rc = cnxk_nix_tsc_convert(dev);
 		if (rc) {
 			plt_err("Failed to calculate delta and freq mult");
@@ -434,24 +434,24 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->ethdev_rss_hf = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc.switch_header_type == ROC_PRIV_FLAGS_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -460,34 +460,34 @@ cnxk_rss_ethdev_to_nix(struct cnxk_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -513,7 +513,7 @@ nix_rss_default_setup(struct cnxk_eth_dev *dev)
 	uint64_t rss_hf;
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 
@@ -729,8 +729,8 @@ nix_lso_fmt_setup(struct cnxk_eth_dev *dev)
 
 	/* Nothing much to do if offload is not enabled */
 	if (!(dev->tx_offloads &
-	      (DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	       DEV_TX_OFFLOAD_GENEVE_TNL_TSO | DEV_TX_OFFLOAD_GRE_TNL_TSO)))
+	      (RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	       RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))
 		return 0;
 
 	/* Setup LSO formats in AF. It's a no-op if other ethdev has
@@ -778,13 +778,13 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		plt_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
@@ -814,7 +814,7 @@ cnxk_nix_configure(struct rte_eth_dev *eth_dev)
 	/* Prepare rx cfg */
 	rx_cfg = ROC_NIX_LF_RX_CFG_DIS_APAD;
 	if (dev->rx_offloads &
-	    (DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	    (RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_OL4;
 		rx_cfg |= ROC_NIX_LF_RX_CFG_CSUM_IL4;
 	}
@@ -1191,12 +1191,12 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled on PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct cnxk_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
 		cnxk_eth_dev_ops.timesync_enable(eth_dev);
 	else
 		cnxk_eth_dev_ops.timesync_disable(eth_dev);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) {
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
 		rc = rte_mbuf_dyn_rx_timestamp_register
 			(&dev->tstamp.tstamp_dynfield_offset,
 			 &dev->tstamp.rx_tstamp_dynflag);
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 2528b3cdaa0c..53a657f8865d 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -54,41 +54,44 @@
 	 CNXK_NIX_TX_NB_SEG_MAX)
 
 #define CNXK_NIX_RSS_L3_L4_SRC_DST                                             \
-	(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY |     \
-	 ETH_RSS_L4_DST_ONLY)
+	(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |                   \
+	 RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 #define CNXK_NIX_RSS_OFFLOAD                                                   \
-	(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP |               \
-	 ETH_RSS_SCTP | ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD |                  \
-	 CNXK_NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | ETH_RSS_C_VLAN)
+	(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |                 \
+	 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | RTE_ETH_RSS_TUNNEL |             \
+	 RTE_ETH_RSS_L2_PAYLOAD | CNXK_NIX_RSS_L3_L4_SRC_DST |                 \
+	 RTE_ETH_RSS_LEVEL_MASK | RTE_ETH_RSS_C_VLAN)
 
 #define CNXK_NIX_TX_OFFLOAD_CAPA                                               \
-	(DEV_TX_OFFLOAD_MBUF_FAST_FREE | DEV_TX_OFFLOAD_MT_LOCKFREE |          \
-	 DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT |             \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |                 \
-	 DEV_TX_OFFLOAD_SCTP_CKSUM | DEV_TX_OFFLOAD_TCP_TSO |                  \
-	 DEV_TX_OFFLOAD_VXLAN_TNL_TSO | DEV_TX_OFFLOAD_GENEVE_TNL_TSO |        \
-	 DEV_TX_OFFLOAD_GRE_TNL_TSO | DEV_TX_OFFLOAD_MULTI_SEGS |              \
-	 DEV_TX_OFFLOAD_IPV4_CKSUM)
+	(RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |          \
+	 RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT |             \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |    \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |                 \
+	 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_TSO |                  \
+	 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |        \
+	 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS |              \
+	 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 
 #define CNXK_NIX_RX_OFFLOAD_CAPA                                               \
-	(DEV_RX_OFFLOAD_CHECKSUM | DEV_RX_OFFLOAD_SCTP_CKSUM |                 \
-	 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | DEV_RX_OFFLOAD_SCATTER |            \
-	 DEV_RX_OFFLOAD_JUMBO_FRAME | DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |         \
-	 DEV_RX_OFFLOAD_RSS_HASH | DEV_RX_OFFLOAD_TIMESTAMP |                  \
-	 DEV_RX_OFFLOAD_VLAN_STRIP)
+	(RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |                 \
+	 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_SCATTER |            \
+	 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |         \
+	 RTE_ETH_RX_OFFLOAD_RSS_HASH | RTE_ETH_RX_OFFLOAD_TIMESTAMP |                  \
+	 RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 
 #define RSS_IPV4_ENABLE                                                        \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP |         \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_SCTP)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV4_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE                                                        \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP |         \
-	 ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_SCTP)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |                            \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |         \
+	 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE                                                     \
-	(ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX | ETH_RSS_IPV6_UDP_EX)
+	(RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_IPV6_TCP_EX | RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS 3
 
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index 37720fb0954e..bf0c6d6b4ad8 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -49,11 +49,11 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
 		val = ROC_NIX_RSS_RETA_SZ_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
 		val = ROC_NIX_RSS_RETA_SZ_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
 		val = ROC_NIX_RSS_RETA_SZ_256;
 	else
 		val = ROC_NIX_RSS_RETA_SZ_64;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index b6cc5286c6d0..0f6817f75d4a 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -81,25 +81,25 @@ cnxk_nix_rx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-		{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
-		{DEV_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
-		{DEV_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
-		{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
-		{DEV_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
-		{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
-		{DEV_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
-		{DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
-		{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-		{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-		{DEV_RX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
-		{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
-		{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+		{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_TCP_LRO, " TCP LRO,"},
+		{RTE_ETH_RX_OFFLOAD_QINQ_STRIP, " QinQ VLAN Strip,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_MACSEC_STRIP, " MACsec Strip,"},
+		{RTE_ETH_RX_OFFLOAD_HEADER_SPLIT, " Header Split,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN Filter,"},
+		{RTE_ETH_RX_OFFLOAD_VLAN_EXTEND, " VLAN Extend,"},
+		{RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo Frame,"},
+		{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+		{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+		{RTE_ETH_RX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_RX_OFFLOAD_KEEP_CRC, " Keep CRC,"},
+		{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP,"},
+		{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
 						 "Scalar, Rx Offloads:"
@@ -143,28 +143,28 @@ cnxk_nix_tx_burst_mode_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-		{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-		{DEV_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
-		{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
-		{DEV_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
-		{DEV_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
-		{DEV_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
-		{DEV_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
-		{DEV_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
-		{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
-		{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
-		{DEV_TX_OFFLOAD_SECURITY, " Security,"},
-		{DEV_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
-		{DEV_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
-		{DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
+		{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " Inner IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_TCP_TSO, " TCP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TSO, " UDP TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPv4 Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_QINQ_INSERT, " QinQ VLAN Insert,"},
+		{RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO, " VXLAN Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO, " GRE Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO, " IP-in-IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO, " Geneve Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_MACSEC_INSERT, " MACsec Insert,"},
+		{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " Multi Thread Lockless Tx,"},
+		{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"},
+		{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " H/W MBUF Free,"},
+		{RTE_ETH_TX_OFFLOAD_SECURITY, " Security,"},
+		{RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO, " UDP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_IP_TNL_TSO, " IP Tunnel TSO,"},
+		{RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP Checksum,"},
+		{RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP, " Timestamp,"}
 	};
 	static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
 						 "Scalar, Tx Offloads:"
@@ -204,8 +204,8 @@ cnxk_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
 	enum rte_eth_fc_mode mode_map[] = {
-					   RTE_FC_NONE, RTE_FC_RX_PAUSE,
-					   RTE_FC_TX_PAUSE, RTE_FC_FULL
+					   RTE_ETH_FC_NONE, RTE_ETH_FC_RX_PAUSE,
+					   RTE_ETH_FC_TX_PAUSE, RTE_ETH_FC_FULL
 					  };
 	struct roc_nix *nix = &dev->nix;
 	int mode;
@@ -265,10 +265,10 @@ cnxk_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -409,13 +409,13 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		plt_err("Scatter offload is not enabled for mtu");
 		goto exit;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    frame_size > (buffsz * CNXK_NIX_RX_NB_SEG_MAX)) {
 		plt_err("Greater than maximum supported packet length");
 		goto exit;
@@ -443,9 +443,9 @@ cnxk_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	frame_size += RTE_ETHER_CRC_LEN;
 
 	if (frame_size > RTE_ETHER_MAX_LEN)
-		dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* Update max_rx_pkt_len */
 	data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -746,8 +746,8 @@ cnxk_nix_reta_update(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta[idx] = reta_conf[i].reta[j];
 			idx++;
@@ -782,8 +782,8 @@ cnxk_nix_reta_query(struct rte_eth_dev *eth_dev,
 		goto fail;
 
 	/* Copy RETA table */
-	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (int)(dev->nix.reta_sz / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = reta[idx];
 			idx++;
@@ -816,7 +816,7 @@ cnxk_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 	if (rss_conf->rss_key)
 		roc_nix_rss_key_set(nix, rss_conf->rss_key);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
diff --git a/drivers/net/cnxk/cnxk_link.c b/drivers/net/cnxk/cnxk_link.c
index 3fdbdba49549..1cff8d56e65b 100644
--- a/drivers/net/cnxk/cnxk_link.c
+++ b/drivers/net/cnxk/cnxk_link.c
@@ -38,7 +38,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		plt_info("Port %d: Link Up - speed %u Mbps - %s",
 			 (int)(eth_dev->data->port_id),
 			 (uint32_t)link->link_speed,
-			 link->link_duplex == ETH_LINK_FULL_DUPLEX
+			 link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 				 ? "full-duplex"
 				 : "half-duplex");
 	else
@@ -66,7 +66,7 @@ cnxk_eth_dev_link_status_cb(struct roc_nix *nix, struct roc_nix_link_info *link)
 
 	eth_link.link_status = link->status;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	/* Print link info */
@@ -94,17 +94,17 @@ cnxk_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 		return 0;
 
 	if (roc_nix_is_lbk(&dev->nix)) {
-		link.link_status = ETH_LINK_UP;
-		link.link_speed = ETH_SPEED_NUM_100G;
-		link.link_autoneg = ETH_LINK_FIXED;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	} else {
 		rc = roc_nix_mac_link_info_get(&dev->nix, &info);
 		if (rc)
 			return rc;
 		link.link_status = info.status;
 		link.link_speed = info.speed;
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 		if (info.full_duplex)
 			link.link_duplex = info.full_duplex;
 	}
diff --git a/drivers/net/cnxk/cnxk_ptp.c b/drivers/net/cnxk/cnxk_ptp.c
index 449489f599c4..139fea256ccd 100644
--- a/drivers/net/cnxk/cnxk_ptp.c
+++ b/drivers/net/cnxk/cnxk_ptp.c
@@ -227,7 +227,7 @@ cnxk_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	dev->rx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 	dev->tx_tstamp_tc.cc_mask = CNXK_CYCLECOUNTER_MASK;
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	rc = roc_nix_ptp_rx_ena_dis(nix, true);
 	if (!rc) {
@@ -257,7 +257,7 @@ int
 cnxk_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 {
 	struct cnxk_eth_dev *dev = cnxk_eth_pmd_priv(eth_dev);
-	uint64_t rx_offloads = DEV_RX_OFFLOAD_TIMESTAMP;
+	uint64_t rx_offloads = RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	struct roc_nix *nix = &dev->nix;
 	int rc = 0;
 
diff --git a/drivers/net/cnxk/cnxk_rte_flow.c b/drivers/net/cnxk/cnxk_rte_flow.c
index 32c1b5dee5fa..ecdfee7b11a6 100644
--- a/drivers/net/cnxk/cnxk_rte_flow.c
+++ b/drivers/net/cnxk/cnxk_rte_flow.c
@@ -69,7 +69,7 @@ npc_rss_action_validate(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		plt_err("multi-queue mode is disabled");
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/cxgbe/cxgbe.h b/drivers/net/cxgbe/cxgbe.h
index 7c89a028bf16..dee618a0db5f 100644
--- a/drivers/net/cxgbe/cxgbe.h
+++ b/drivers/net/cxgbe/cxgbe.h
@@ -28,32 +28,32 @@
 #define CXGBE_LINK_STATUS_POLL_CNT 100 /* Max number of times to poll */
 
 #define CXGBE_DEFAULT_RSS_KEY_LEN     40 /* 320-bits */
-#define CXGBE_RSS_HF_IPV4_MASK (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-				ETH_RSS_NONFRAG_IPV4_OTHER)
-#define CXGBE_RSS_HF_IPV6_MASK (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
-				ETH_RSS_NONFRAG_IPV6_OTHER | \
-				ETH_RSS_IPV6_EX)
-#define CXGBE_RSS_HF_TCP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_TCP | \
-				    ETH_RSS_IPV6_TCP_EX)
-#define CXGBE_RSS_HF_UDP_IPV6_MASK (ETH_RSS_NONFRAG_IPV6_UDP | \
-				    ETH_RSS_IPV6_UDP_EX)
-#define CXGBE_RSS_HF_ALL (ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP)
+#define CXGBE_RSS_HF_IPV4_MASK (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
+#define CXGBE_RSS_HF_IPV6_MASK (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+				RTE_ETH_RSS_IPV6_EX)
+#define CXGBE_RSS_HF_TCP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+				    RTE_ETH_RSS_IPV6_TCP_EX)
+#define CXGBE_RSS_HF_UDP_IPV6_MASK (RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+				    RTE_ETH_RSS_IPV6_UDP_EX)
+#define CXGBE_RSS_HF_ALL (RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP)
 
 /* Tx/Rx Offloads supported */
-#define CXGBE_TX_OFFLOADS (DEV_TX_OFFLOAD_VLAN_INSERT | \
-			   DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_TX_OFFLOAD_UDP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_CKSUM | \
-			   DEV_TX_OFFLOAD_TCP_TSO | \
-			   DEV_TX_OFFLOAD_MULTI_SEGS)
-
-#define CXGBE_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			   DEV_RX_OFFLOAD_IPV4_CKSUM | \
-			   DEV_RX_OFFLOAD_UDP_CKSUM | \
-			   DEV_RX_OFFLOAD_TCP_CKSUM | \
-			   DEV_RX_OFFLOAD_JUMBO_FRAME | \
-			   DEV_RX_OFFLOAD_SCATTER | \
-			   DEV_RX_OFFLOAD_RSS_HASH)
+#define CXGBE_TX_OFFLOADS (RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+			   RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+			   RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+
+#define CXGBE_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM | \
+			   RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+			   RTE_ETH_RX_OFFLOAD_SCATTER | \
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 /* Devargs filtermode and filtermask representation */
 enum cxgbe_devargs_filter_mode_flags {
diff --git a/drivers/net/cxgbe/cxgbe_ethdev.c b/drivers/net/cxgbe/cxgbe_ethdev.c
index 177eca397600..78c1381fdb47 100644
--- a/drivers/net/cxgbe/cxgbe_ethdev.c
+++ b/drivers/net/cxgbe/cxgbe_ethdev.c
@@ -231,9 +231,9 @@ int cxgbe_dev_link_update(struct rte_eth_dev *eth_dev,
 	}
 
 	new_link.link_status = cxgbe_force_linkup(adapter) ?
-			       ETH_LINK_UP : pi->link_cfg.link_ok;
+			       RTE_ETH_LINK_UP : pi->link_cfg.link_ok;
 	new_link.link_autoneg = (lc->link_caps & FW_PORT_CAP32_ANEG) ? 1 : 0;
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	new_link.link_speed = t4_fwcap_to_speed(lc->link_caps);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
@@ -316,10 +316,10 @@ int cxgbe_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	/* set to jumbo mode if needed */
 	if (new_mtu > CXGBE_ETH_MAX_LEN)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	err = t4_set_rxmode(adapter, adapter->mbox, pi->viid, new_mtu, -1, -1,
 			    -1, -1, true);
@@ -396,7 +396,7 @@ int cxgbe_dev_start(struct rte_eth_dev *eth_dev)
 			goto out;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 	else
 		eth_dev->data->scattered_rx = 0;
@@ -460,9 +460,9 @@ int cxgbe_dev_configure(struct rte_eth_dev *eth_dev)
 
 	CXGBE_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!(adapter->flags & FW_QUEUE_BOUND)) {
 		err = cxgbe_setup_sge_fwevtq(adapter);
@@ -685,10 +685,10 @@ int cxgbe_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 	/* Set to jumbo mode if necessary */
 	if (pkt_len > CXGBE_ETH_MAX_LEN)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	err = t4_sge_alloc_rxq(adapter, &rxq->rspq, false, eth_dev, msi_idx,
 			       &rxq->fl, NULL,
@@ -1079,13 +1079,13 @@ static int cxgbe_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		rx_pause = 1;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1098,12 +1098,12 @@ static int cxgbe_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	u8 tx_pause = 0, rx_pause = 0;
 	int ret;
 
-	if (fc_conf->mode == RTE_FC_FULL) {
+	if (fc_conf->mode == RTE_ETH_FC_FULL) {
 		tx_pause = 1;
 		rx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE) {
 		tx_pause = 1;
-	} else if (fc_conf->mode == RTE_FC_RX_PAUSE) {
+	} else if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE) {
 		rx_pause = 1;
 	}
 
@@ -1199,9 +1199,9 @@ static int cxgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 		rss_hf |= CXGBE_RSS_HF_IPV6_MASK;
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN) {
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (flags & F_FW_RSS_VI_CONFIG_CMD_UDPEN)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	}
 
 	if (flags & F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN)
@@ -1245,8 +1245,8 @@ static int cxgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 
 	rte_memcpy(rss, pi->rss, pi->rss_size * sizeof(u16));
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 
@@ -1276,8 +1276,8 @@ static int cxgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (!(reta_conf[idx].mask & (1ULL << shift)))
 			continue;
 
@@ -1478,7 +1478,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_100G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_100G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_100G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
 		}
@@ -1487,7 +1487,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_50G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_50G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_50G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER);
 		}
@@ -1496,7 +1496,7 @@ static int cxgbe_fec_get_capa_speed_to_fec(struct link_config *lc,
 
 	if (lc->pcaps & FW_PORT_CAP32_SPEED_25G) {
 		if (capa_arr) {
-			capa_arr[num].speed = ETH_SPEED_NUM_25G;
+			capa_arr[num].speed = RTE_ETH_SPEED_NUM_25G;
 			capa_arr[num].capa = RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 					     RTE_ETH_FEC_MODE_CAPA_MASK(RS);
diff --git a/drivers/net/cxgbe/cxgbe_main.c b/drivers/net/cxgbe/cxgbe_main.c
index 6dd1bf1f836e..54723edc2144 100644
--- a/drivers/net/cxgbe/cxgbe_main.c
+++ b/drivers/net/cxgbe/cxgbe_main.c
@@ -1671,7 +1671,7 @@ int cxgbe_link_start(struct port_info *pi)
 	 * that step explicitly.
 	 */
 	ret = t4_set_rxmode(adapter, adapter->mbox, pi->viid, mtu, -1, -1, -1,
-			    !!(conf_offloads & DEV_RX_OFFLOAD_VLAN_STRIP),
+			    !!(conf_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP),
 			    true);
 	if (ret == 0) {
 		ret = cxgbe_mpstcam_modify(pi, (int)pi->xact_addr_filt,
@@ -1695,7 +1695,7 @@ int cxgbe_link_start(struct port_info *pi)
 	}
 
 	if (ret == 0 && cxgbe_force_linkup(adapter))
-		pi->eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		pi->eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return ret;
 }
 
@@ -1726,10 +1726,10 @@ int cxgbe_write_rss_conf(const struct port_info *pi, uint64_t rss_hf)
 	if (rss_hf & CXGBE_RSS_HF_IPV4_MASK)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4TWOTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		flags |= F_FW_RSS_VI_CONFIG_CMD_IP4FOURTUPEN |
 			 F_FW_RSS_VI_CONFIG_CMD_UDPEN;
 
@@ -1866,7 +1866,7 @@ static void fw_caps_to_speed_caps(enum fw_port_type port_type,
 {
 #define SET_SPEED(__speed_name) \
 	do { \
-		*speed_caps |= ETH_LINK_ ## __speed_name; \
+		*speed_caps |= RTE_ETH_LINK_ ## __speed_name; \
 	} while (0)
 
 #define FW_CAPS_TO_SPEED(__fw_name) \
@@ -1953,7 +1953,7 @@ void cxgbe_get_speed_caps(struct port_info *pi, u32 *speed_caps)
 			      speed_caps);
 
 	if (!(pi->link_cfg.pcaps & FW_PORT_CAP32_ANEG))
-		*speed_caps |= ETH_LINK_SPEED_FIXED;
+		*speed_caps |= RTE_ETH_LINK_SPEED_FIXED;
 }
 
 /**
diff --git a/drivers/net/cxgbe/sge.c b/drivers/net/cxgbe/sge.c
index e5f7721dc4b3..eddb818c4861 100644
--- a/drivers/net/cxgbe/sge.c
+++ b/drivers/net/cxgbe/sge.c
@@ -366,7 +366,7 @@ static unsigned int refill_fl_usembufs(struct adapter *adap, struct sge_fl *q,
 	int ret, i;
 	struct rte_pktmbuf_pool_private *mbp_priv;
 	u8 jumbo_en = rxq->rspq.eth_dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_JUMBO_FRAME;
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* Use jumbo mtu buffers if mbuf data room size can fit jumbo data. */
 	mbp_priv = rte_mempool_get_priv(rxq->rspq.mb_pool);
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index 27d670f843d2..c466256137a3 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -54,30 +54,30 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* Keep track of whether QMAN and BMAN have been globally initialized */
 static int is_global_init;
@@ -189,10 +189,10 @@ dpaa_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > DPAA_ETH_MAX_LEN)
 		dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
@@ -238,7 +238,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 		tx_offloads, dev_tx_offloads_nodis);
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		uint32_t max_len;
 
 		DPAA_PMD_DEBUG("enabling jumbo");
@@ -259,7 +259,7 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 			- RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN - VLAN_TAG_SIZE;
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		DPAA_PMD_DEBUG("enabling scatter mode");
 		fman_if_set_sg(dev->process_private, 1);
 		dev->data->scattered_rx = 1;
@@ -304,43 +304,43 @@ dpaa_eth_dev_configure(struct rte_eth_dev *dev)
 
 	/* Configure link only if link is UP*/
 	if (link->link_status) {
-		if (eth_conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
+		if (eth_conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 			/* Start autoneg only if link is not in autoneg mode */
 			if (!link->link_autoneg)
 				dpaa_restart_link_autoneg(__fif->node_name);
-		} else if (eth_conf->link_speeds & ETH_LINK_SPEED_FIXED) {
-			switch (eth_conf->link_speeds & ~ETH_LINK_SPEED_FIXED) {
-			case ETH_LINK_SPEED_10M_HD:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+		} else if (eth_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
+			switch (eth_conf->link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+			case RTE_ETH_LINK_SPEED_10M_HD:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10M:
-				speed = ETH_SPEED_NUM_10M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10M:
+				speed = RTE_ETH_SPEED_NUM_10M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M_HD:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_HALF_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M_HD:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_HALF_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_100M:
-				speed = ETH_SPEED_NUM_100M;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_100M:
+				speed = RTE_ETH_SPEED_NUM_100M;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_1G:
-				speed = ETH_SPEED_NUM_1G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_1G:
+				speed = RTE_ETH_SPEED_NUM_1G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_2_5G:
-				speed = ETH_SPEED_NUM_2_5G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_2_5G:
+				speed = RTE_ETH_SPEED_NUM_2_5G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
-			case ETH_LINK_SPEED_10G:
-				speed = ETH_SPEED_NUM_10G;
-				duplex = ETH_LINK_FULL_DUPLEX;
+			case RTE_ETH_LINK_SPEED_10G:
+				speed = RTE_ETH_SPEED_NUM_10G;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			default:
-				speed = ETH_SPEED_NUM_NONE;
-				duplex = ETH_LINK_FULL_DUPLEX;
+				speed = RTE_ETH_SPEED_NUM_NONE;
+				duplex = RTE_ETH_LINK_FULL_DUPLEX;
 				break;
 			}
 			/* Set link speed */
@@ -556,30 +556,30 @@ static int dpaa_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = DPAA_MAX_MAC_FILTER;
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA_RSS_OFFLOAD_ALL;
 
 	if (fif->mac_type == fman_mac_1g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G;
 	} else if (fif->mac_type == fman_mac_2_5g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G;
 	} else if (fif->mac_type == fman_mac_10g) {
-		dev_info->speed_capa = ETH_LINK_SPEED_10M_HD
-					| ETH_LINK_SPEED_10M
-					| ETH_LINK_SPEED_100M_HD
-					| ETH_LINK_SPEED_100M
-					| ETH_LINK_SPEED_1G
-					| ETH_LINK_SPEED_2_5G
-					| ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD
+					| RTE_ETH_LINK_SPEED_10M
+					| RTE_ETH_LINK_SPEED_100M_HD
+					| RTE_ETH_LINK_SPEED_100M
+					| RTE_ETH_LINK_SPEED_1G
+					| RTE_ETH_LINK_SPEED_2_5G
+					| RTE_ETH_LINK_SPEED_10G;
 	} else {
 		DPAA_PMD_ERR("invalid link_speed: %s, %d",
 			     dpaa_intf->name, fif->mac_type);
@@ -612,13 +612,13 @@ dpaa_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"},
-			{DEV_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"}
+			{RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"},
+			{RTE_ETH_RX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"}
 	};
 
 	/* Update Rx offload info */
@@ -645,14 +645,14 @@ dpaa_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -686,7 +686,7 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 			ret = dpaa_get_link_status(__fif->node_name, link);
 			if (ret)
 				return ret;
-			if (link->link_status == ETH_LINK_DOWN &&
+			if (link->link_status == RTE_ETH_LINK_DOWN &&
 			    wait_to_complete)
 				rte_delay_ms(CHECK_INTERVAL);
 			else
@@ -697,15 +697,15 @@ static int dpaa_eth_link_update(struct rte_eth_dev *dev,
 	}
 
 	if (ioctl_version < 2) {
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
-		link->link_autoneg = ETH_LINK_AUTONEG;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 		if (fif->mac_type == fman_mac_1g)
-			link->link_speed = ETH_SPEED_NUM_1G;
+			link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		else if (fif->mac_type == fman_mac_2_5g)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else if (fif->mac_type == fman_mac_10g)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
 			DPAA_PMD_ERR("invalid link_speed: %s, %d",
 				     dpaa_intf->name, fif->mac_type);
@@ -981,7 +981,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <= buffsz) {
 		;
 	} else if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SCATTER) {
+			RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (dev->data->dev_conf.rxmode.max_rx_pkt_len >
 			buffsz * DPAA_SGT_MAX_ENTRIES) {
 			DPAA_PMD_ERR("max RxPkt size %d too big to fit "
@@ -1303,7 +1303,7 @@ static int dpaa_link_down(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_DOWN);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_DOWN);
 	else
 		return dpaa_eth_dev_stop(dev);
 	return 0;
@@ -1319,7 +1319,7 @@ static int dpaa_link_up(struct rte_eth_dev *dev)
 	__fif = container_of(fif, struct __fman_if, __if);
 
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
-		dpaa_update_link_status(__fif->node_name, ETH_LINK_UP);
+		dpaa_update_link_status(__fif->node_name, RTE_ETH_LINK_UP);
 	else
 		dpaa_eth_dev_start(dev);
 	return 0;
@@ -1349,10 +1349,10 @@ dpaa_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	if (fc_conf->mode == RTE_FC_NONE) {
+	if (fc_conf->mode == RTE_ETH_FC_NONE) {
 		return 0;
-	} else if (fc_conf->mode == RTE_FC_TX_PAUSE ||
-		 fc_conf->mode == RTE_FC_FULL) {
+	} else if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE ||
+		 fc_conf->mode == RTE_ETH_FC_FULL) {
 		fman_if_set_fc_threshold(dev->process_private,
 					 fc_conf->high_water,
 					 fc_conf->low_water,
@@ -1396,11 +1396,11 @@ dpaa_flow_ctrl_get(struct rte_eth_dev *dev,
 	}
 	ret = fman_if_get_fc_threshold(dev->process_private);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time =
 			fman_if_get_fc_quanta(dev->process_private);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -1663,10 +1663,10 @@ static int dpaa_fc_set_default(struct dpaa_if *dpaa_intf,
 	fc_conf = dpaa_intf->fc_conf;
 	ret = fman_if_get_fc_threshold(fman_intf);
 	if (ret) {
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		fc_conf->pause_time = fman_if_get_fc_quanta(fman_intf);
 	} else {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
diff --git a/drivers/net/dpaa/dpaa_ethdev.h b/drivers/net/dpaa/dpaa_ethdev.h
index b5728e09c29f..c868e9d5bd9b 100644
--- a/drivers/net/dpaa/dpaa_ethdev.h
+++ b/drivers/net/dpaa/dpaa_ethdev.h
@@ -74,11 +74,11 @@
 #define DPAA_DEBUG_FQ_TX_ERROR   1
 
 #define DPAA_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP)
 
 #define DPAA_TX_CKSUM_OFFLOAD_MASK (             \
 		PKT_TX_IP_CKSUM |                \
diff --git a/drivers/net/dpaa/dpaa_flow.c b/drivers/net/dpaa/dpaa_flow.c
index c5b5ec869519..1ccd03602790 100644
--- a/drivers/net/dpaa/dpaa_flow.c
+++ b/drivers/net/dpaa/dpaa_flow.c
@@ -394,7 +394,7 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1U << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_L2_PAYLOAD:
 
 				if (l2_configured)
 					break;
@@ -404,9 +404,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_ETH;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
 
 				if (ipv4_configured)
 					break;
@@ -415,10 +415,10 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV4;
 				break;
 
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (ipv6_configured)
 					break;
@@ -427,9 +427,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_IPV6;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
 
 				if (tcp_configured)
 					break;
@@ -438,9 +438,9 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_TCP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (udp_configured)
 					break;
@@ -449,8 +449,8 @@ static void set_dist_units(ioc_fm_pcd_net_env_params_t *dist_units,
 					= HEADER_TYPE_UDP;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
index 641e7027f12e..7c92b2a42e3f 100644
--- a/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
+++ b/drivers/net/dpaa2/base/dpaa2_hw_dpni.c
@@ -216,7 +216,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 		if (req_dist_set % 2 != 0) {
 			dist_field = 1ULL << loop;
 			switch (dist_field) {
-			case ETH_RSS_L2_PAYLOAD:
+			case RTE_ETH_RSS_L2_PAYLOAD:
 
 				if (l2_configured)
 					break;
@@ -233,7 +233,7 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 			break;
 
-			case ETH_RSS_MPLS:
+			case RTE_ETH_RSS_MPLS:
 
 				if (mpls_configured)
 					break;
@@ -270,13 +270,13 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_IPV4:
-			case ETH_RSS_FRAG_IPV4:
-			case ETH_RSS_NONFRAG_IPV4_OTHER:
-			case ETH_RSS_IPV6:
-			case ETH_RSS_FRAG_IPV6:
-			case ETH_RSS_NONFRAG_IPV6_OTHER:
-			case ETH_RSS_IPV6_EX:
+			case RTE_ETH_RSS_IPV4:
+			case RTE_ETH_RSS_FRAG_IPV4:
+			case RTE_ETH_RSS_NONFRAG_IPV4_OTHER:
+			case RTE_ETH_RSS_IPV6:
+			case RTE_ETH_RSS_FRAG_IPV6:
+			case RTE_ETH_RSS_NONFRAG_IPV6_OTHER:
+			case RTE_ETH_RSS_IPV6_EX:
 
 				if (l3_configured)
 					break;
@@ -314,12 +314,12 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 			break;
 
-			case ETH_RSS_NONFRAG_IPV4_TCP:
-			case ETH_RSS_NONFRAG_IPV6_TCP:
-			case ETH_RSS_NONFRAG_IPV4_UDP:
-			case ETH_RSS_NONFRAG_IPV6_UDP:
-			case ETH_RSS_IPV6_TCP_EX:
-			case ETH_RSS_IPV6_UDP_EX:
+			case RTE_ETH_RSS_NONFRAG_IPV4_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_TCP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_UDP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_UDP:
+			case RTE_ETH_RSS_IPV6_TCP_EX:
+			case RTE_ETH_RSS_IPV6_UDP_EX:
 
 				if (l4_configured)
 					break;
@@ -346,8 +346,8 @@ dpaa2_distset_to_dpkg_profile_cfg(
 				i++;
 				break;
 
-			case ETH_RSS_NONFRAG_IPV4_SCTP:
-			case ETH_RSS_NONFRAG_IPV6_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV4_SCTP:
+			case RTE_ETH_RSS_NONFRAG_IPV6_SCTP:
 
 				if (sctp_configured)
 					break;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c12169578e22..23bb985b95e9 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -38,34 +38,34 @@
 
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_CHECKSUM |
-		DEV_RX_OFFLOAD_SCTP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_TIMESTAMP;
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 /* Rx offloads which cannot be disabled */
 static uint64_t dev_rx_offloads_nodis =
-		DEV_RX_OFFLOAD_RSS_HASH |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_MT_LOCKFREE |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MT_LOCKFREE |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 /* Tx offloads which cannot be disabled */
 static uint64_t dev_tx_offloads_nodis =
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 /* enable timestamp in mbuf */
 bool dpaa2_enable_ts[RTE_MAX_ETHPORTS];
@@ -143,7 +143,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN Filter not avaialble */
 		if (!priv->max_vlan_filters) {
 			DPAA2_PMD_INFO("VLAN filter not available");
@@ -151,7 +151,7 @@ dpaa2_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 
 		if (dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_VLAN_FILTER)
+			RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = dpni_enable_vlan_filter(dpni, CMD_PRI_LOW,
 						      priv->token, true);
 		else
@@ -252,13 +252,13 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 					dev_rx_offloads_nodis;
 	dev_info->tx_offload_capa = dev_tx_offloads_sup |
 					dev_tx_offloads_nodis;
-	dev_info->speed_capa = ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	dev_info->max_hash_mac_addrs = 0;
 	dev_info->max_vfs = 0;
-	dev_info->max_vmdq_pools = ETH_16_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	dev_info->flow_type_rss_offloads = DPAA2_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxportconf.burst_size = dpaa2_dqrr_size;
@@ -271,10 +271,10 @@ dpaa2_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->default_rxportconf.ring_size = DPAA2_RX_DEFAULT_NBDESC;
 
 	if (dpaa2_svr_family == SVR_LX2160A) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G |
-				ETH_LINK_SPEED_40G |
-				ETH_LINK_SPEED_50G |
-				ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G |
+				RTE_ETH_LINK_SPEED_40G |
+				RTE_ETH_LINK_SPEED_50G |
+				RTE_ETH_LINK_SPEED_100G;
 	}
 
 	return 0;
@@ -292,16 +292,16 @@ dpaa2_dev_rx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} rx_offload_map[] = {
-			{DEV_RX_OFFLOAD_CHECKSUM, " Checksum,"},
-			{DEV_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
-			{DEV_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
-			{DEV_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
-			{DEV_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
-			{DEV_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
-			{DEV_RX_OFFLOAD_RSS_HASH, " RSS,"},
-			{DEV_RX_OFFLOAD_SCATTER, " Scattered,"}
+			{RTE_ETH_RX_OFFLOAD_CHECKSUM, " Checksum,"},
+			{RTE_ETH_RX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM, " Outer UDP csum,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_STRIP, " VLAN strip,"},
+			{RTE_ETH_RX_OFFLOAD_VLAN_FILTER, " VLAN filter,"},
+			{RTE_ETH_RX_OFFLOAD_JUMBO_FRAME, " Jumbo frame,"},
+			{RTE_ETH_RX_OFFLOAD_TIMESTAMP, " Timestamp,"},
+			{RTE_ETH_RX_OFFLOAD_RSS_HASH, " RSS,"},
+			{RTE_ETH_RX_OFFLOAD_SCATTER, " Scattered,"}
 	};
 
 	/* Update Rx offload info */
@@ -328,15 +328,15 @@ dpaa2_dev_tx_burst_mode_get(struct rte_eth_dev *dev,
 		uint64_t flags;
 		const char *output;
 	} tx_offload_map[] = {
-			{DEV_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
-			{DEV_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
-			{DEV_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
-			{DEV_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
-			{DEV_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
-			{DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
-			{DEV_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
-			{DEV_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
-			{DEV_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
+			{RTE_ETH_TX_OFFLOAD_VLAN_INSERT, " VLAN Insert,"},
+			{RTE_ETH_TX_OFFLOAD_IPV4_CKSUM, " IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_UDP_CKSUM, " UDP csum,"},
+			{RTE_ETH_TX_OFFLOAD_TCP_CKSUM, " TCP csum,"},
+			{RTE_ETH_TX_OFFLOAD_SCTP_CKSUM, " SCTP csum,"},
+			{RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM, " Outer IPV4 csum,"},
+			{RTE_ETH_TX_OFFLOAD_MT_LOCKFREE, " MT lockfree,"},
+			{RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE, " MBUF free disable,"},
+			{RTE_ETH_TX_OFFLOAD_MULTI_SEGS, " Scattered,"}
 	};
 
 	/* Update Tx offload info */
@@ -559,7 +559,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		tx_offloads, dev_tx_offloads_nodis);
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (eth_conf->rxmode.max_rx_pkt_len <= DPAA2_MAX_RX_PKT_LEN) {
 			ret = dpni_set_max_frame_length(dpni, CMD_PRI_LOW,
 				priv->token, eth_conf->rxmode.max_rx_pkt_len
@@ -578,7 +578,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (eth_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		for (tc_index = 0; tc_index < priv->num_rx_tc; tc_index++) {
 			ret = dpaa2_setup_flow_dist(dev,
 					eth_conf->rx_adv_conf.rss_conf.rss_hf,
@@ -592,12 +592,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rx_l3_csum_offload = true;
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_UDP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) ||
-		(rx_offloads & DEV_RX_OFFLOAD_SCTP_CKSUM))
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) ||
+		(rx_offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM))
 		rx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -615,7 +615,7 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 	}
 
 #if !defined(RTE_LIBRTE_IEEE1588)
-	if (rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 #endif
 	{
 		ret = rte_mbuf_dyn_rx_timestamp_register(
@@ -628,12 +628,12 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		dpaa2_enable_ts[dev->data->port_id] = true;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		tx_l3_csum_offload = true;
 
-	if ((tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-		(tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM))
+	if ((tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+		(tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM))
 		tx_l4_csum_offload = true;
 
 	ret = dpni_set_offload(dpni, CMD_PRI_LOW, priv->token,
@@ -665,8 +665,8 @@ dpaa2_eth_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		dpaa2_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		dpaa2_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 
 	dpaa2_tm_init(dev);
 
@@ -1477,10 +1477,10 @@ dpaa2_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > DPAA2_ETH_MAX_LEN)
 		dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
@@ -1881,7 +1881,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 			DPAA2_PMD_DEBUG("error: dpni_get_link_state %d", ret);
 			return -1;
 		}
-		if (state.up == ETH_LINK_DOWN &&
+		if (state.up == RTE_ETH_LINK_DOWN &&
 		    wait_to_complete)
 			rte_delay_ms(CHECK_INTERVAL);
 		else
@@ -1893,9 +1893,9 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
 	link.link_speed = state.rate;
 
 	if (state.options & DPNI_LINK_OPT_HALF_DUPLEX)
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == -1)
@@ -2056,9 +2056,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	No TX side flow control (send Pause frame disabled)
 		 */
 		if (!(state.options & DPNI_LINK_OPT_ASYM_PAUSE))
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_RX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	} else {
 		/* DPNI_LINK_OPT_PAUSE not set
 		 *  if ASYM_PAUSE set,
@@ -2068,9 +2068,9 @@ dpaa2_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 *	Flow control disabled
 		 */
 		if (state.options & DPNI_LINK_OPT_ASYM_PAUSE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return ret;
@@ -2114,14 +2114,14 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	/* update cfg with fc_conf */
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		/* Full flow control;
 		 * OPT_PAUSE set, ASYM_PAUSE not set
 		 */
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		/* Enable RX flow control
 		 * OPT_PAUSE not set;
 		 * ASYM_PAUSE set;
@@ -2129,7 +2129,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		cfg.options &= ~DPNI_LINK_OPT_PAUSE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		/* Enable TX Flow control
 		 * OPT_PAUSE set
 		 * ASYM_PAUSE set
@@ -2137,7 +2137,7 @@ dpaa2_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		cfg.options |= DPNI_LINK_OPT_PAUSE;
 		cfg.options |= DPNI_LINK_OPT_ASYM_PAUSE;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		/* Disable Flow control
 		 * OPT_PAUSE not set
 		 * ASYM_PAUSE not set
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index b9c729f6cdc0..ca75a2175524 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -65,12 +65,12 @@
 #define DPAA2_TX_CONF_ENABLE	0x08
 
 #define DPAA2_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IP | \
-	ETH_RSS_UDP | \
-	ETH_RSS_TCP | \
-	ETH_RSS_SCTP | \
-	ETH_RSS_MPLS)
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IP | \
+	RTE_ETH_RSS_UDP | \
+	RTE_ETH_RSS_TCP | \
+	RTE_ETH_RSS_SCTP | \
+	RTE_ETH_RSS_MPLS)
 
 /* LX2 FRC Parsed values (Little Endian) */
 #define DPAA2_PKT_TYPE_ETHER		0x0060
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3f9..7c77243b5d1a 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -773,7 +773,7 @@ dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 #endif
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP)
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			rte_vlan_strip(bufs[num_rx]);
 
 		dq_storage++;
@@ -987,7 +987,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 							eth_data->port_id);
 
 		if (eth_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_VLAN_STRIP) {
+				RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			rte_vlan_strip(bufs[num_rx]);
 		}
 
@@ -1230,7 +1230,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 					if (unlikely(((*bufs)->ol_flags
 						& PKT_TX_VLAN_PKT) ||
 						(eth_data->dev_conf.txmode.offloads
-						& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+						& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 						ret = rte_vlan_insert(bufs);
 						if (ret)
 							goto send_n_return;
@@ -1273,7 +1273,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 			if (unlikely(((*bufs)->ol_flags & PKT_TX_VLAN_PKT) ||
 				(eth_data->dev_conf.txmode.offloads
-				& DEV_TX_OFFLOAD_VLAN_INSERT))) {
+				& RTE_ETH_TX_OFFLOAD_VLAN_INSERT))) {
 				int ret = rte_vlan_insert(bufs);
 				if (ret)
 					goto send_n_return;
diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 3b4d9c3ee6f4..ca488fea966f 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -81,15 +81,15 @@
 #define E1000_FTQF_QUEUE_ENABLE          0x00000100
 
 #define IGB_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 /*
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index a0ca371b0275..6fb205f8577f 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -599,8 +599,8 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	e1000_clear_hw_cntrs_base_generic(hw);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_em_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to update vlan offload");
@@ -613,39 +613,39 @@ eth_em_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -1104,9 +1104,9 @@ eth_em_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_mtu_seg_max = EM_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	/* Preferred queue parameters */
 	dev_info->default_rxportconf.nb_queues = 1;
@@ -1164,17 +1164,17 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -1426,15 +1426,15 @@ eth_em_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			em_vlan_hw_strip_enable(dev);
 		else
 			em_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			em_vlan_hw_filter_enable(dev);
 		else
 			em_vlan_hw_filter_disable(dev);
@@ -1603,7 +1603,7 @@ eth_em_interrupt_action(struct rte_eth_dev *dev,
 	if (link.link_status) {
 		PMD_INIT_LOG(INFO, " Port %d: Link Up - speed %u Mbps - %s",
 			     dev->data->port_id, link.link_speed,
-			     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			     "full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
@@ -1685,13 +1685,13 @@ eth_em_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -1820,11 +1820,11 @@ eth_em_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* switch to jumbo mode if needed */
 	if (frame_size > E1000_ETH_MAX_LEN) {
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl |= E1000_RCTL_LPE;
 	} else {
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl &= ~E1000_RCTL_LPE;
 	}
 	E1000_WRITE_REG(hw, E1000_RCTL, rctl);
diff --git a/drivers/net/e1000/em_rxtx.c b/drivers/net/e1000/em_rxtx.c
index dfd8f2fd0074..cf672c32277b 100644
--- a/drivers/net/e1000/em_rxtx.c
+++ b/drivers/net/e1000/em_rxtx.c
@@ -93,7 +93,7 @@ struct em_rx_queue {
 	struct em_rx_entry *sw_ring;   /**< address of RX software ring. */
 	struct rte_mbuf *pkt_first_seg; /**< First segment of current packet. */
 	struct rte_mbuf *pkt_last_seg;  /**< Last segment of current packet. */
-	uint64_t	    offloads;   /**< Offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< Offloads of RTE_ETH_RX_OFFLOAD_* */
 	uint16_t            nb_rx_desc; /**< number of RX descriptors. */
 	uint16_t            rx_tail;    /**< current value of RDT register. */
 	uint16_t            nb_rx_hold; /**< number of held free RX desc. */
@@ -172,7 +172,7 @@ struct em_tx_queue {
 	uint8_t                wthresh;  /**< Write-back threshold register. */
 	struct em_ctx_info ctx_cache;
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 };
 
 #if 1
@@ -1168,11 +1168,11 @@ em_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	RTE_SET_USED(dev);
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	return tx_offload_capa;
 }
@@ -1367,15 +1367,15 @@ em_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 	max_rx_pktlen = em_get_max_pktlen(dev);
 
 	rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP  |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		DEV_RX_OFFLOAD_UDP_CKSUM   |
-		DEV_RX_OFFLOAD_TCP_CKSUM   |
-		DEV_RX_OFFLOAD_KEEP_CRC    |
-		DEV_RX_OFFLOAD_SCATTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		RTE_ETH_RX_OFFLOAD_SCATTER;
 	if (max_rx_pktlen > RTE_ETHER_MAX_LEN)
-		rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	return rx_offload_capa;
 }
@@ -1468,7 +1468,7 @@ eth_em_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->rx_free_thresh = rx_conf->rx_free_thresh;
 	rxq->queue_id = queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1806,7 +1806,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -1839,7 +1839,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		 * to avoid splitting packets that don't fit into
 		 * one buffer.
 		 */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME ||
 				rctl_bsize < RTE_ETHER_MAX_LEN) {
 			if (!dev->data->scattered_rx)
 				PMD_INIT_LOG(DEBUG, "forcing scatter mode");
@@ -1849,7 +1849,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_em_recv_scattered_pkts;
@@ -1862,7 +1862,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = E1000_READ_REG(hw, E1000_RXCSUM);
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
@@ -1874,21 +1874,21 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	if ((hw->mac.type == e1000_ich9lan ||
 			hw->mac.type == e1000_pch2lan ||
 			hw->mac.type == e1000_ich10lan) &&
-			rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+			rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		u32 rxdctl = E1000_READ_REG(hw, E1000_RXDCTL(0));
 		E1000_WRITE_REG(hw, E1000_RXDCTL(0), rxdctl | 3);
 		E1000_WRITE_REG(hw, E1000_ERT, 0x100 | (1 << 13));
 	}
 
 	if (hw->mac.type == e1000_pch2lan) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 			e1000_lv_jumbo_workaround_ich8lan(hw, TRUE);
 		else
 			e1000_lv_jumbo_workaround_ich8lan(hw, FALSE);
 	}
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= E1000_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1908,7 +1908,7 @@ eth_em_rx_init(struct rte_eth_dev *dev)
 	/*
 	 * Configure support of jumbo frames, if any.
 	 */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		rctl |= E1000_RCTL_LPE;
 	else
 		rctl &= ~E1000_RCTL_LPE;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 10ee0f33415a..03509c960326 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -1082,21 +1082,21 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 	uint16_t nb_rx_q = dev->data->nb_rx_queues;
 	uint16_t nb_tx_q = dev->data->nb_tx_queues;
 
-	if ((rx_mq_mode & ETH_MQ_RX_DCB_FLAG) ||
-	    tx_mq_mode == ETH_MQ_TX_DCB ||
-	    tx_mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_DCB ||
+	    tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* Check multi-queue mode.
-		 * To no break software we accept ETH_MQ_RX_NONE as this might
+		 * To not break software we accept RTE_ETH_MQ_RX_NONE as this might
 		 * be used to turn off VLAN filter.
 		 */
 
-		if (rx_mq_mode == ETH_MQ_RX_NONE ||
-		    rx_mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+		if (rx_mq_mode == RTE_ETH_MQ_RX_NONE ||
+		    rx_mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool = 1;
 		} else {
 			/* Only support one queue on VFs.
@@ -1108,12 +1108,12 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 		/* TX mode is not used here, so mode might be ignored.*/
-		if (tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(WARNING, "SRIOV is active,"
 					" TX mode %d is not supported. "
 					" Driver will behave as %d mode.",
-					tx_mq_mode, ETH_MQ_TX_VMDQ_ONLY);
+					tx_mq_mode, RTE_ETH_MQ_TX_VMDQ_ONLY);
 		}
 
 		/* check valid queue number */
@@ -1126,17 +1126,17 @@ igb_check_mq_mode(struct rte_eth_dev *dev)
 		/* To no break software that set invalid mode, only display
 		 * warning if invalid mode is used.
 		 */
-		if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		    rx_mq_mode != ETH_MQ_RX_VMDQ_ONLY &&
-		    rx_mq_mode != ETH_MQ_RX_RSS) {
+		if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY &&
+		    rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 			/* RSS together with VMDq not supported*/
 			PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				     rx_mq_mode);
 			return -EINVAL;
 		}
 
-		if (tx_mq_mode != ETH_MQ_TX_NONE &&
-		    tx_mq_mode != ETH_MQ_TX_VMDQ_ONLY) {
+		if (tx_mq_mode != RTE_ETH_MQ_TX_NONE &&
+		    tx_mq_mode != RTE_ETH_MQ_TX_VMDQ_ONLY) {
 			PMD_INIT_LOG(WARNING, "TX mode %d is not supported."
 					" Due to txmode is meaningless in this"
 					" driver, just ignore.",
@@ -1155,8 +1155,8 @@ eth_igb_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multipe queue mode checking */
 	ret  = igb_check_mq_mode(dev);
@@ -1296,8 +1296,8 @@ eth_igb_start(struct rte_eth_dev *dev)
 	/*
 	 * VLAN Offload Settings
 	 */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK | \
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = eth_igb_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set vlan offload");
@@ -1305,7 +1305,7 @@ eth_igb_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable VLAN filter since VMDq always use VLAN filter */
 		igb_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1319,39 +1319,39 @@ eth_igb_start(struct rte_eth_dev *dev)
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = E1000_ALL_SPEED_DUPLEX;
 		hw->mac.autoneg = 1;
 	} else {
 		num_speeds = 0;
-		autoneg = (*speeds & ETH_LINK_SPEED_FIXED) == 0;
+		autoneg = (*speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		/* Reset */
 		hw->phy.autoneg_advertised = 0;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_FIXED)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_FIXED)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
@@ -2194,21 +2194,21 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	case e1000_82576:
 		dev_info->max_rx_queues = 16;
 		dev_info->max_tx_queues = 16;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 16;
 		break;
 
 	case e1000_82580:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
 	case e1000_i350:
 		dev_info->max_rx_queues = 8;
 		dev_info->max_tx_queues = 8;
-		dev_info->max_vmdq_pools = ETH_8_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_8_POOLS;
 		dev_info->vmdq_queue_num = 8;
 		break;
 
@@ -2234,7 +2234,7 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		return -EINVAL;
 	}
 	dev_info->hash_key_size = IGB_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGB_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -2260,9 +2260,9 @@ eth_igb_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - E1000_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -2305,12 +2305,12 @@ eth_igbvf_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_rx_bufsize = 256; /* See BSIZE field of RCTL register. */
 	dev_info->max_rx_pktlen  = 0x3FFF; /* See RLPML register. */
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	switch (hw->mac.type) {
 	case e1000_vfadapt:
 		dev_info->max_rx_queues = 2;
@@ -2411,17 +2411,17 @@ eth_igb_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 	} else if (!link_check) {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -2597,7 +2597,7 @@ eth_igb_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= E1000_CTRL_EXT_EXT_VLAN;
 
 	/* only outer TPID of double VLAN can be configured*/
-	if (qinq && vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (qinq && vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg = E1000_READ_REG(hw, E1000_VET);
 		reg = (reg & (~E1000_VET_VET_EXT)) |
 			((uint32_t)tpid << E1000_VET_VET_EXT_SHIFT);
@@ -2686,7 +2686,7 @@ igb_vlan_hw_extend_disable(struct rte_eth_dev *dev)
 	E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
 
 	/* Update maximum packet length */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		E1000_WRITE_REG(hw, E1000_RLPML,
 				dev->data->dev_conf.rxmode.max_rx_pkt_len);
 }
@@ -2704,7 +2704,7 @@ igb_vlan_hw_extend_enable(struct rte_eth_dev *dev)
 	E1000_WRITE_REG(hw, E1000_CTRL_EXT, reg);
 
 	/* Update maximum packet length */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		E1000_WRITE_REG(hw, E1000_RLPML,
 			dev->data->dev_conf.rxmode.max_rx_pkt_len +
 						VLAN_TAG_SIZE);
@@ -2716,22 +2716,22 @@ eth_igb_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if(mask & ETH_VLAN_STRIP_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igb_vlan_hw_strip_enable(dev);
 		else
 			igb_vlan_hw_strip_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_FILTER_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igb_vlan_hw_filter_enable(dev);
 		else
 			igb_vlan_hw_filter_disable(dev);
 	}
 
-	if(mask & ETH_VLAN_EXTEND_MASK){
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			igb_vlan_hw_extend_enable(dev);
 		else
 			igb_vlan_hw_extend_disable(dev);
@@ -2883,7 +2883,7 @@ eth_igb_interrupt_action(struct rte_eth_dev *dev,
 				     " Port %d: Link Up - speed %u Mbps - %s",
 				     dev->data->port_id,
 				     (unsigned)link.link_speed,
-				     link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				     link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				     "full-duplex" : "half-duplex");
 		} else {
 			PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3037,13 +3037,13 @@ eth_igb_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -3112,18 +3112,18 @@ eth_igb_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		 * on configuration
 		 */
 		switch (fc_conf->mode) {
-		case RTE_FC_NONE:
+		case RTE_ETH_FC_NONE:
 			ctrl &= ~E1000_CTRL_RFCE & ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_RX_PAUSE:
+		case RTE_ETH_FC_RX_PAUSE:
 			ctrl |= E1000_CTRL_RFCE;
 			ctrl &= ~E1000_CTRL_TFCE;
 			break;
-		case RTE_FC_TX_PAUSE:
+		case RTE_ETH_FC_TX_PAUSE:
 			ctrl |= E1000_CTRL_TFCE;
 			ctrl &= ~E1000_CTRL_RFCE;
 			break;
-		case RTE_FC_FULL:
+		case RTE_ETH_FC_FULL:
 			ctrl |= E1000_CTRL_RFCE | E1000_CTRL_TFCE;
 			break;
 		default:
@@ -3271,22 +3271,22 @@ igbvf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_E1000_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -3584,16 +3584,16 @@ eth_igb_rss_reta_update(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IGB_4_BIT_MASK);
 		if (!mask)
@@ -3625,16 +3625,16 @@ eth_igb_rss_reta_query(struct rte_eth_dev *dev,
 	uint16_t idx, shift;
 	struct e1000_hw *hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += IGB_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IGB_4_BIT_MASK);
 		if (!mask)
@@ -4407,11 +4407,11 @@ eth_igb_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* switch to jumbo mode if needed */
 	if (frame_size > E1000_ETH_MAX_LEN) {
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl |= E1000_RCTL_LPE;
 	} else {
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl &= ~E1000_RCTL_LPE;
 	}
 	E1000_WRITE_REG(hw, E1000_RCTL, rctl);
diff --git a/drivers/net/e1000/igb_pf.c b/drivers/net/e1000/igb_pf.c
index 2ce74dd5a9a5..fe355ef6b3b5 100644
--- a/drivers/net/e1000/igb_pf.c
+++ b/drivers/net/e1000/igb_pf.c
@@ -88,7 +88,7 @@ void igb_pf_host_init(struct rte_eth_dev *eth_dev)
 	if (*vfinfo == NULL)
 		rte_panic("Cannot allocate memory for private VF data\n");
 
-	RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_8_POOLS;
+	RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_8_POOLS;
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_vmdq_idx = vf_num;
 	RTE_ETH_DEV_SRIOV(eth_dev).def_pool_q_idx = (uint16_t)(vf_num * nb_queue);
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index 278d5d2712af..a57dde59dbc0 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -111,7 +111,7 @@ struct igb_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;  /**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /**
@@ -185,7 +185,7 @@ struct igb_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igb_advctx_info ctx_cache[IGB_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 };
 
 #if 1
@@ -1456,13 +1456,13 @@ igb_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	RTE_SET_USED(dev);
-	tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_SCTP_CKSUM  |
-			  DEV_TX_OFFLOAD_TCP_TSO     |
-			  DEV_TX_OFFLOAD_MULTI_SEGS;
+	tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	return tx_offload_capa;
 }
@@ -1635,20 +1635,20 @@ igb_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 
 	hw = E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP  |
-			  DEV_RX_OFFLOAD_VLAN_FILTER |
-			  DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_RX_OFFLOAD_UDP_CKSUM   |
-			  DEV_RX_OFFLOAD_TCP_CKSUM   |
-			  DEV_RX_OFFLOAD_JUMBO_FRAME |
-			  DEV_RX_OFFLOAD_KEEP_CRC    |
-			  DEV_RX_OFFLOAD_SCATTER     |
-			  DEV_RX_OFFLOAD_RSS_HASH;
+	rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			  RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			  RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+			  RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+			  RTE_ETH_RX_OFFLOAD_SCATTER     |
+			  RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == e1000_i350 ||
 	    hw->mac.type == e1000_i210 ||
 	    hw->mac.type == e1000_i211)
-		rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	return rx_offload_capa;
 }
@@ -1729,7 +1729,7 @@ eth_igb_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1963,23 +1963,23 @@ igb_hw_rss_hash_set(struct e1000_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = E1000_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= E1000_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	E1000_WRITE_REG(hw, E1000_MRQC, mrqc);
 }
@@ -2045,23 +2045,23 @@ int eth_igb_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & E1000_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -2183,15 +2183,15 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 			E1000_VMOLR_ROPE | E1000_VMOLR_BAM |
 			E1000_VMOLR_MPME);
 
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_UNTAG)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 			vmolr |= E1000_VMOLR_AUPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_MC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 			vmolr |= E1000_VMOLR_ROMPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_HASH_UC)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 			vmolr |= E1000_VMOLR_ROPE;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_BROADCAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 			vmolr |= E1000_VMOLR_BAM;
-		if (cfg->rx_mode & ETH_VMDQ_ACCEPT_MULTICAST)
+		if (cfg->rx_mode & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 			vmolr |= E1000_VMOLR_MPME;
 
 		E1000_WRITE_REG(hw, E1000_VMOLR(i), vmolr);
@@ -2227,9 +2227,9 @@ igb_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 	/* VLVF: set up filters for vlan tags as configured */
 	for (i = 0; i < cfg->nb_pool_maps; i++) {
 		/* set vlan id in VF register and set the valid bit */
-		E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE | \
-                        (cfg->pool_map[i].vlan_id & ETH_VLAN_ID_MAX) | \
-			((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT ) & \
+		E1000_WRITE_REG(hw, E1000_VLVF(i), (E1000_VLVF_VLANID_ENABLE |
+			(cfg->pool_map[i].vlan_id & RTE_ETH_VLAN_ID_MAX) |
+			((cfg->pool_map[i].pools << E1000_VLVF_POOLSEL_SHIFT) &
 			E1000_VLVF_POOLSEL_MASK)));
 	}
 
@@ -2281,7 +2281,7 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		E1000_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	uint32_t mrqc;
 
-	if (RTE_ETH_DEV_SRIOV(dev).active == ETH_8_POOLS) {
+	if (RTE_ETH_DEV_SRIOV(dev).active == RTE_ETH_8_POOLS) {
 		/*
 		 * SRIOV active scheme
 		 * FIXME if support RSS together with VMDq & SRIOV
@@ -2295,14 +2295,14 @@ igb_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-			case ETH_MQ_RX_RSS:
+			case RTE_ETH_MQ_RX_RSS:
 				igb_rss_configure(dev);
 				break;
-			case ETH_MQ_RX_VMDQ_ONLY:
+			case RTE_ETH_MQ_RX_VMDQ_ONLY:
 				/*Configure general VMDQ only RX parameters*/
 				igb_vmdq_rx_hw_configure(dev);
 				break;
-			case ETH_MQ_RX_NONE:
+			case RTE_ETH_MQ_RX_NONE:
 				/* if mq_mode is none, disable rss mode.*/
 			default:
 				igb_rss_disable(dev);
@@ -2342,7 +2342,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	/*
 	 * Configure support of jumbo frames, if any.
 	 */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		uint32_t max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
 
 		rctl |= E1000_RCTL_LPE;
@@ -2351,7 +2351,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Set maximum packet length by default, and might be updated
 		 * together with enabling/disabling dual VLAN.
 		 */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			max_len += VLAN_TAG_SIZE;
 
 		E1000_WRITE_REG(hw, E1000_RLPML, max_len);
@@ -2387,7 +2387,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 *  call to configure
 		 */
-		if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -2458,7 +2458,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
@@ -2502,16 +2502,16 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= E1000_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= E1000_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_IPOFL;
 	if (rxmode->offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM))
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		rxcsum |= E1000_RXCSUM_TUOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_TUOFL;
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= E1000_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~E1000_RXCSUM_CRCOFL;
@@ -2519,7 +2519,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 	E1000_WRITE_REG(hw, E1000_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		rctl &= ~E1000_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 
 		/* clear STRCRC bit in all queues */
@@ -2559,7 +2559,7 @@ eth_igb_rx_init(struct rte_eth_dev *dev)
 		(hw->mac.mc_filter_type << E1000_RCTL_MO_SHIFT);
 
 	/* Make sure VLAN Filters are off. */
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_VMDQ_ONLY)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_ONLY)
 		rctl &= ~E1000_RCTL_VFE;
 	/* Don't store bad packets. */
 	rctl &= ~E1000_RCTL_SBP;
@@ -2758,7 +2758,7 @@ eth_igbvf_rx_init(struct rte_eth_dev *dev)
 		E1000_WRITE_REG(hw, E1000_RXDCTL(i), rxdctl);
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		if (!dev->data->scattered_rx)
 			PMD_INIT_LOG(DEBUG, "forcing scatter mode");
 		dev->rx_pkt_burst = eth_igb_recv_scattered_pkts;
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 4cebf60a68a7..4e3ee72608f4 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -116,10 +116,10 @@ static const struct ena_stats ena_stats_rx_strings[] = {
 #define ENA_STATS_ARRAY_TX	ARRAY_SIZE(ena_stats_tx_strings)
 #define ENA_STATS_ARRAY_RX	ARRAY_SIZE(ena_stats_rx_strings)
 
-#define QUEUE_OFFLOADS (DEV_TX_OFFLOAD_TCP_CKSUM |\
-			DEV_TX_OFFLOAD_UDP_CKSUM |\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define QUEUE_OFFLOADS (RTE_ETH_TX_OFFLOAD_TCP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 #define MBUF_OFFLOADS (PKT_TX_L4_MASK |\
 		       PKT_TX_IP_CKSUM |\
 		       PKT_TX_TCP_SEG)
@@ -310,7 +310,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 	    (queue_offloads & QUEUE_OFFLOADS)) {
 		/* check if TSO is required */
 		if ((mbuf->ol_flags & PKT_TX_TCP_SEG) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_TSO)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 			ena_tx_ctx->tso_enable = true;
 
 			ena_meta->l4_hdr_len = GET_L4_HDR_LEN(mbuf);
@@ -318,7 +318,7 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L3 checksum is needed */
 		if ((mbuf->ol_flags & PKT_TX_IP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM))
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM))
 			ena_tx_ctx->l3_csum_enable = true;
 
 		if (mbuf->ol_flags & PKT_TX_IPV6) {
@@ -335,12 +335,12 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 		/* check if L4 checksum is needed */
 		if (((mbuf->ol_flags & PKT_TX_L4_MASK) == PKT_TX_TCP_CKSUM) &&
-		    (queue_offloads & DEV_TX_OFFLOAD_TCP_CKSUM)) {
+		    (queue_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_TCP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else if (((mbuf->ol_flags & PKT_TX_L4_MASK) ==
 				PKT_TX_UDP_CKSUM) &&
-				(queue_offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+				(queue_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 			ena_tx_ctx->l4_proto = ENA_ETH_IO_L4_PROTO_UDP;
 			ena_tx_ctx->l4_csum_enable = true;
 		} else {
@@ -623,9 +623,9 @@ static int ena_link_update(struct rte_eth_dev *dev,
 	struct rte_eth_link *link = &dev->data->dev_link;
 	struct ena_adapter *adapter = dev->data->dev_private;
 
-	link->link_status = adapter->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
-	link->link_speed = ETH_SPEED_NUM_NONE;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = adapter->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
+	link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return 0;
 }
@@ -684,7 +684,7 @@ static uint32_t ena_get_mtu_conf(struct ena_adapter *adapter)
 	uint32_t max_frame_len = adapter->max_mtu;
 
 	if (adapter->edev_data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_JUMBO_FRAME)
+	    RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		max_frame_len =
 			adapter->edev_data->dev_conf.rxmode.max_rx_pkt_len;
 
@@ -915,7 +915,7 @@ static int ena_start(struct rte_eth_dev *dev)
 	if (rc)
 		goto err_start_tx;
 
-	if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) {
+	if (adapter->edev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
 		rc = ena_rss_configure(adapter);
 		if (rc)
 			goto err_rss_init;
@@ -1854,9 +1854,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
 
 	adapter->state = ENA_ADAPTER_STATE_CONFIG;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
-	dev->data->dev_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
+	dev->data->dev_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	adapter->tx_selected_offloads = dev->data->dev_conf.txmode.offloads;
 	adapter->rx_selected_offloads = dev->data->dev_conf.rxmode.offloads;
@@ -1907,36 +1907,36 @@ static int ena_infos_get(struct rte_eth_dev *dev,
 	ena_assert_msg(ena_dev != NULL, "Uninitialized device\n");
 
 	dev_info->speed_capa =
-			ETH_LINK_SPEED_1G   |
-			ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_5G   |
-			ETH_LINK_SPEED_10G  |
-			ETH_LINK_SPEED_25G  |
-			ETH_LINK_SPEED_40G  |
-			ETH_LINK_SPEED_50G  |
-			ETH_LINK_SPEED_100G;
+			RTE_ETH_LINK_SPEED_1G   |
+			RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_5G   |
+			RTE_ETH_LINK_SPEED_10G  |
+			RTE_ETH_LINK_SPEED_25G  |
+			RTE_ETH_LINK_SPEED_40G  |
+			RTE_ETH_LINK_SPEED_50G  |
+			RTE_ETH_LINK_SPEED_100G;
 
 	/* Set Tx & Rx features available for device */
 	if (adapter->offloads.tso4_supported)
-		tx_feat	|= DEV_TX_OFFLOAD_TCP_TSO;
+		tx_feat	|= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (adapter->offloads.tx_csum_supported)
-		tx_feat |= DEV_TX_OFFLOAD_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM;
+		tx_feat |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (adapter->offloads.rx_csum_supported)
-		rx_feat |= DEV_RX_OFFLOAD_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM  |
-			DEV_RX_OFFLOAD_TCP_CKSUM;
+		rx_feat |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
-	rx_feat |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-	tx_feat |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	rx_feat |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+	tx_feat |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Inform framework about available features */
 	dev_info->rx_offload_capa = rx_feat;
 	if (adapter->offloads.rss_hash_supported)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->rx_queue_offload_capa = rx_feat;
 	dev_info->tx_offload_capa = tx_feat;
 	dev_info->tx_queue_offload_capa = tx_feat;
@@ -2100,7 +2100,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	}
 #endif
 
-	fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH;
+	fill_hash = rx_ring->offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	descs_in_use = rx_ring->ring_size -
 		ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 06ac8b06b5cb..3b1844e50982 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -54,8 +54,8 @@
 
 #define ENA_HASH_KEY_SIZE		40
 
-#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-			ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP)
+#define ENA_ALL_RSS_HF (RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define ENA_IO_TXQ_IDX(q)		(2 * (q))
 #define ENA_IO_RXQ_IDX(q)		(2 * (q) + 1)
diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c
index 88afe13da04d..e7b57659491d 100644
--- a/drivers/net/ena/ena_rss.c
+++ b/drivers/net/ena/ena_rss.c
@@ -76,7 +76,7 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
 	if (reta_size == 0 || reta_conf == NULL)
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -93,8 +93,8 @@ int ena_rss_reta_update(struct rte_eth_dev *dev,
 		/* Each reta_conf is for 64 entries.
 		 * To support 128 we use 2 conf of 64.
 		 */
-		conf_idx = i / RTE_RETA_GROUP_SIZE;
-		idx = i % RTE_RETA_GROUP_SIZE;
+		conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (TEST_BIT(reta_conf[conf_idx].mask, idx)) {
 			entry_value =
 				ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]);
@@ -137,10 +137,10 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
 	int reta_idx;
 
 	if (reta_size == 0 || reta_conf == NULL ||
-	    (reta_size > RTE_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL)))
+	    (reta_size > RTE_ETH_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL)))
 		return -EINVAL;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR,
 			"RSS was not configured for the PMD\n");
 		return -ENOTSUP;
@@ -155,8 +155,8 @@ int ena_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0 ; i < reta_size ; i++) {
-		reta_conf_idx = i / RTE_RETA_GROUP_SIZE;
-		reta_idx = i % RTE_RETA_GROUP_SIZE;
+		reta_conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx))
 			reta_conf[reta_conf_idx].reta[reta_idx] =
 				ENA_IO_RXQ_IDX_REV(indirect_table[i]);
@@ -200,34 +200,34 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Convert proto to ETH flag */
 	switch (proto) {
 	case ENA_ADMIN_RSS_TCP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP4:
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		break;
 	case ENA_ADMIN_RSS_TCP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 		break;
 	case ENA_ADMIN_RSS_UDP6:
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 		break;
 	case ENA_ADMIN_RSS_IP4:
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 		break;
 	case ENA_ADMIN_RSS_IP6:
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 		break;
 	case ENA_ADMIN_RSS_IP4_FRAG:
-		rss_hf |= ETH_RSS_FRAG_IPV4;
+		rss_hf |= RTE_ETH_RSS_FRAG_IPV4;
 		break;
 	case ENA_ADMIN_RSS_NOT_IP:
-		rss_hf |= ETH_RSS_L2_PAYLOAD;
+		rss_hf |= RTE_ETH_RSS_L2_PAYLOAD;
 		break;
 	case ENA_ADMIN_RSS_TCP6_EX:
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 		break;
 	case ENA_ADMIN_RSS_IP6_EX:
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 		break;
 	default:
 		break;
@@ -236,10 +236,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L3. */
 	switch (fields & ENA_HF_RSS_ALL_L3) {
 	case ENA_ADMIN_RSS_L3_SA:
-		rss_hf |= ETH_RSS_L3_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L3_DA:
-		rss_hf |= ETH_RSS_L3_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L3_DST_ONLY;
 		break;
 	default:
 		break;
@@ -248,10 +248,10 @@ static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto,
 	/* Check if only DA or SA is being used for L4. */
 	switch (fields & ENA_HF_RSS_ALL_L4) {
 	case ENA_ADMIN_RSS_L4_SP:
-		rss_hf |= ETH_RSS_L4_SRC_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_SRC_ONLY;
 		break;
 	case ENA_ADMIN_RSS_L4_DP:
-		rss_hf |= ETH_RSS_L4_DST_ONLY;
+		rss_hf |= RTE_ETH_RSS_L4_DST_ONLY;
 		break;
 	default:
 		break;
@@ -269,11 +269,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA;
 
 	/* Determine which fields of L3 should be used. */
-	switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) {
-	case ETH_RSS_L3_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) {
+	case RTE_ETH_RSS_L3_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_DA;
 		break;
-	case ETH_RSS_L3_SRC_ONLY:
+	case RTE_ETH_RSS_L3_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L3_SA;
 		break;
 	default:
@@ -285,11 +285,11 @@ static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto,
 	}
 
 	/* Determine which fields of L4 should be used. */
-	switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) {
-	case ETH_RSS_L4_DST_ONLY:
+	switch (rss_hf & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) {
+	case RTE_ETH_RSS_L4_DST_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_DP;
 		break;
-	case ETH_RSS_L4_SRC_ONLY:
+	case RTE_ETH_RSS_L4_SRC_ONLY:
 		fields_mask |= ENA_ADMIN_RSS_L4_SP;
 		break;
 	default:
@@ -335,43 +335,43 @@ static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf)
 	int rc, i;
 
 	/* Turn on appropriate fields for each requested packet type */
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
 		selected_fields[ENA_ADMIN_RSS_UDP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf);
 
-	if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0)
+	if ((rss_hf & RTE_ETH_RSS_FRAG_IPV4) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf);
 
-	if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0)
+	if ((rss_hf & RTE_ETH_RSS_L2_PAYLOAD) != 0)
 		selected_fields[ENA_ADMIN_RSS_NOT_IP].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf);
 
-	if ((rss_hf & ETH_RSS_IPV6_EX) != 0)
+	if ((rss_hf & RTE_ETH_RSS_IPV6_EX) != 0)
 		selected_fields[ENA_ADMIN_RSS_IP6_EX].fields =
 			ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf);
 
@@ -542,7 +542,7 @@ int ena_rss_hash_conf_get(struct rte_eth_dev *dev,
 	uint16_t admin_hf;
 	static bool warn_once;
 
-	if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (!(dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n");
 		return -ENOTSUP;
 	}
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index b496cd470045..e0fb44edeb41 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -100,27 +100,27 @@ enetc_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	status = enetc_port_rd(enetc_hw, ENETC_PM0_STATUS);
 
 	if (status & ENETC_LINK_MODE)
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	else
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 
 	if (status & ENETC_LINK_STATUS)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	switch (status & ENETC_LINK_SPEED_MASK) {
 	case ENETC_LINK_SPEED_1G:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case ENETC_LINK_SPEED_100M:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	default:
 	case ENETC_LINK_SPEED_10M:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -207,11 +207,11 @@ enetc_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 	dev_info->max_tx_queues = MAX_TX_RINGS;
 	dev_info->max_rx_pktlen = ENETC_MAC_MAXFRM_SIZE;
 	dev_info->rx_offload_capa =
-		(DEV_RX_OFFLOAD_IPV4_CKSUM |
-		 DEV_RX_OFFLOAD_UDP_CKSUM |
-		 DEV_RX_OFFLOAD_TCP_CKSUM |
-		 DEV_RX_OFFLOAD_KEEP_CRC |
-		 DEV_RX_OFFLOAD_JUMBO_FRAME);
+		(RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		 RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
 
 	return 0;
 }
@@ -462,7 +462,7 @@ enetc_rx_queue_setup(struct rte_eth_dev *dev,
 			       RTE_ETH_QUEUE_STATE_STOPPED;
 	}
 
-	rx_ring->crc_len = (uint8_t)((rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+	rx_ring->crc_len = (uint8_t)((rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				     RTE_ETHER_CRC_LEN : 0);
 
 	return 0;
@@ -679,10 +679,10 @@ enetc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > ENETC_ETH_MAX_LEN)
 		dev->data->dev_conf.rxmode.offloads &=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	enetc_port_wr(enetc_hw, ENETC_PTCMSDUR(0), ENETC_MAC_MAXFRM_SIZE);
 	enetc_port_wr(enetc_hw, ENETC_PTXMBAR, 2 * ENETC_MAC_MAXFRM_SIZE);
@@ -708,7 +708,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		uint32_t max_len;
 
 		max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -723,7 +723,7 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 			RTE_ETHER_CRC_LEN;
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		int config;
 
 		config = enetc_port_rd(enetc_hw, ENETC_PM0_CMD_CFG);
@@ -731,10 +731,10 @@ enetc_dev_configure(struct rte_eth_dev *dev)
 		enetc_port_wr(enetc_hw, ENETC_PM0_CMD_CFG, config);
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		checksum &= ~L3_CKSUM;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM | DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		checksum &= ~L4_CKSUM;
 
 	enetc_port_wr(enetc_hw, ENETC_PAR_PORT_CFG, checksum);
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index 47bfdac2cfdd..d5493c98345d 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -178,7 +178,7 @@ struct enic {
 	 */
 	uint8_t rss_hash_type; /* NIC_CFG_RSS_HASH_TYPE flags */
 	uint8_t rss_enable;
-	uint64_t rss_hf; /* ETH_RSS flags */
+	uint64_t rss_hf; /* RTE_ETH_RSS flags */
 	union vnic_rss_key rss_key;
 	union vnic_rss_cpu rss_cpu;
 
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index 8d5797523b8f..30cd1d4f5dd1 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -38,30 +38,30 @@ static const struct vic_speed_capa {
 	uint16_t sub_devid;
 	uint32_t capa;
 } vic_speed_capa_map[] = {
-	{ 0x0043, ETH_LINK_SPEED_10G }, /* VIC */
-	{ 0x0047, ETH_LINK_SPEED_10G }, /* P81E PCIe */
-	{ 0x0048, ETH_LINK_SPEED_10G }, /* M81KR Mezz */
-	{ 0x004f, ETH_LINK_SPEED_10G }, /* 1280 Mezz */
-	{ 0x0084, ETH_LINK_SPEED_10G }, /* 1240 MLOM */
-	{ 0x0085, ETH_LINK_SPEED_10G }, /* 1225 PCIe */
-	{ 0x00cd, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1285 PCIe */
-	{ 0x00ce, ETH_LINK_SPEED_10G }, /* 1225T PCIe */
-	{ 0x012a, ETH_LINK_SPEED_40G }, /* M4308 */
-	{ 0x012c, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1340 MLOM */
-	{ 0x012e, ETH_LINK_SPEED_10G }, /* 1227 PCIe */
-	{ 0x0137, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1380 Mezz */
-	{ 0x014d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1385 PCIe */
-	{ 0x015d, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_40G }, /* 1387 MLOM */
-	{ 0x0215, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1440 Mezz */
-	{ 0x0216, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-		  ETH_LINK_SPEED_40G }, /* 1480 MLOM */
-	{ 0x0217, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1455 PCIe */
-	{ 0x0218, ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G }, /* 1457 MLOM */
-	{ 0x0219, ETH_LINK_SPEED_40G }, /* 1485 PCIe */
-	{ 0x021a, ETH_LINK_SPEED_40G }, /* 1487 MLOM */
-	{ 0x024a, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1495 PCIe */
-	{ 0x024b, ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G }, /* 1497 MLOM */
+	{ 0x0043, RTE_ETH_LINK_SPEED_10G }, /* VIC */
+	{ 0x0047, RTE_ETH_LINK_SPEED_10G }, /* P81E PCIe */
+	{ 0x0048, RTE_ETH_LINK_SPEED_10G }, /* M81KR Mezz */
+	{ 0x004f, RTE_ETH_LINK_SPEED_10G }, /* 1280 Mezz */
+	{ 0x0084, RTE_ETH_LINK_SPEED_10G }, /* 1240 MLOM */
+	{ 0x0085, RTE_ETH_LINK_SPEED_10G }, /* 1225 PCIe */
+	{ 0x00cd, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1285 PCIe */
+	{ 0x00ce, RTE_ETH_LINK_SPEED_10G }, /* 1225T PCIe */
+	{ 0x012a, RTE_ETH_LINK_SPEED_40G }, /* M4308 */
+	{ 0x012c, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1340 MLOM */
+	{ 0x012e, RTE_ETH_LINK_SPEED_10G }, /* 1227 PCIe */
+	{ 0x0137, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1380 Mezz */
+	{ 0x014d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1385 PCIe */
+	{ 0x015d, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_40G }, /* 1387 MLOM */
+	{ 0x0215, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1440 Mezz */
+	{ 0x0216, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+		  RTE_ETH_LINK_SPEED_40G }, /* 1480 MLOM */
+	{ 0x0217, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1455 PCIe */
+	{ 0x0218, RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G }, /* 1457 MLOM */
+	{ 0x0219, RTE_ETH_LINK_SPEED_40G }, /* 1485 PCIe */
+	{ 0x021a, RTE_ETH_LINK_SPEED_40G }, /* 1487 MLOM */
+	{ 0x024a, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1495 PCIe */
+	{ 0x024b, RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G }, /* 1497 MLOM */
 	{ 0, 0 }, /* End marker */
 };
 
@@ -293,8 +293,8 @@ static int enicpmd_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	ENICPMD_FUNC_TRACE();
 
 	offloads = eth_dev->data->dev_conf.rxmode.offloads;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			enic->ig_vlan_strip_en = 1;
 		else
 			enic->ig_vlan_strip_en = 0;
@@ -319,17 +319,17 @@ static int enicpmd_dev_configure(struct rte_eth_dev *eth_dev)
 		return ret;
 	}
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	enic->mc_count = 0;
 	enic->hw_ip_checksum = !!(eth_dev->data->dev_conf.rxmode.offloads &
-				  DEV_RX_OFFLOAD_CHECKSUM);
+				  RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* All vlan offload masks to apply the current settings */
-	mask = ETH_VLAN_STRIP_MASK |
-		ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+		RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = enicpmd_vlan_offload_set(eth_dev, mask);
 	if (ret) {
 		dev_err(enic, "Failed to configure VLAN offloads\n");
@@ -431,14 +431,14 @@ static uint32_t speed_capa_from_pci_id(struct rte_eth_dev *eth_dev)
 	}
 	/* 1300 and later models are at least 40G */
 	if (id >= 0x0100)
-		return ETH_LINK_SPEED_40G;
+		return RTE_ETH_LINK_SPEED_40G;
 	/* VFs have subsystem id 0, check device id */
 	if (id == 0) {
 		/* Newer VF implies at least 40G model */
 		if (pdev->id.device_id == PCI_DEVICE_ID_CISCO_VIC_ENET_SN)
-			return ETH_LINK_SPEED_40G;
+			return RTE_ETH_LINK_SPEED_40G;
 	}
-	return ETH_LINK_SPEED_10G;
+	return RTE_ETH_LINK_SPEED_10G;
 }
 
 static int enicpmd_dev_info_get(struct rte_eth_dev *eth_dev,
@@ -770,8 +770,8 @@ static int enicpmd_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = enic_sop_rq_idx_to_rte_idx(
 				enic->rss_cpu.cpu[i / 4].b[i % 4]);
@@ -802,8 +802,8 @@ static int enicpmd_dev_rss_reta_update(struct rte_eth_dev *dev,
 	 */
 	rss_cpu = enic->rss_cpu;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			rss_cpu.cpu[i / 4].b[i % 4] =
 				enic_rte_rq_idx_to_sop_idx(
@@ -879,7 +879,7 @@ static void enicpmd_dev_rxq_info_get(struct rte_eth_dev *dev,
 	 */
 	conf->offloads = enic->rx_offload_capa;
 	if (!enic->ig_vlan_strip_en)
-		conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* rx_thresh and other fields are not applicable for enic */
 }
 
@@ -965,8 +965,8 @@ static int enicpmd_dev_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
 static int udp_tunnel_common_check(struct enic *enic,
 				   struct rte_eth_udp_tunnel *tnl)
 {
-	if (tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN &&
-	    tnl->prot_type != RTE_TUNNEL_TYPE_GENEVE)
+	if (tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN &&
+	    tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
 		return -ENOTSUP;
 	if (!enic->overlay_offload) {
 		ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
@@ -1006,7 +1006,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
@@ -1035,7 +1035,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
 	ret = udp_tunnel_common_check(enic, tnl);
 	if (ret)
 		return ret;
-	vxlan = (tnl->prot_type == RTE_TUNNEL_TYPE_VXLAN);
+	vxlan = (tnl->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN);
 	if (vxlan)
 		port = enic->vxlan_port;
 	else
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2affd380c6a4..754cf362c6d8 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -430,7 +430,7 @@ int enic_link_update(struct rte_eth_dev *eth_dev)
 
 	memset(&link, 0, sizeof(link));
 	link.link_status = enic_get_link_status(enic);
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_speed = vnic_dev_port_speed(enic->vdev);
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -597,7 +597,7 @@ int enic_enable(struct enic *enic)
 	}
 
 	eth_dev->data->dev_link.link_speed = vnic_dev_port_speed(enic->vdev);
-	eth_dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	eth_dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* vnic notification of link status has already been turned on in
 	 * enic_dev_init() which is called during probe time.  Here we are
@@ -638,11 +638,11 @@ int enic_enable(struct enic *enic)
 	 * and vlan insertion are supported.
 	 */
 	simple_tx_offloads = enic->tx_offload_capa &
-		(DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_VLAN_INSERT |
-		 DEV_TX_OFFLOAD_IPV4_CKSUM |
-		 DEV_TX_OFFLOAD_UDP_CKSUM |
-		 DEV_TX_OFFLOAD_TCP_CKSUM);
+		(RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		 RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if ((eth_dev->data->dev_conf.txmode.offloads &
 	     ~simple_tx_offloads) == 0) {
 		ENICPMD_LOG(DEBUG, " use the simple tx handler");
@@ -858,7 +858,7 @@ int enic_alloc_rq(struct enic *enic, uint16_t queue_idx,
 	max_rx_pkt_len = enic->rte_dev->data->dev_conf.rxmode.max_rx_pkt_len;
 
 	if (enic->rte_dev->data->dev_conf.rxmode.offloads &
-	    DEV_RX_OFFLOAD_SCATTER) {
+	    RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev_info(enic, "Rq %u Scatter rx mode enabled\n", queue_idx);
 		/* ceil((max pkt len)/mbuf_size) */
 		mbufs_per_pkt = (max_rx_pkt_len + mbuf_size - 1) / mbuf_size;
@@ -1386,15 +1386,15 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 	rss_hash_type = 0;
 	rss_hf = rss_conf->rss_hf & enic->flow_type_rss_offloads;
 	if (enic->rq_count > 1 &&
-	    (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) &&
+	    (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) &&
 	    rss_hf != 0) {
 		rss_enable = 1;
-		if (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			      ETH_RSS_NONFRAG_IPV4_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV4;
 			if (enic->udp_rss_weak) {
 				/*
@@ -1405,12 +1405,12 @@ int enic_set_rss_conf(struct enic *enic, struct rte_eth_rss_conf *rss_conf)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV4;
 			}
 		}
-		if (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_IPV6_EX |
-			      ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER))
+		if (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_IPV6_EX |
+			      RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX))
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX))
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
-		if (rss_hf & (ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX)) {
+		if (rss_hf & (RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX)) {
 			rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_UDP_IPV6;
 			if (enic->udp_rss_weak)
 				rss_hash_type |= NIC_CFG_RSS_HASH_TYPE_TCP_IPV6;
@@ -1751,9 +1751,9 @@ enic_enable_overlay_offload(struct enic *enic)
 		return -EINVAL;
 	}
 	enic->tx_offload_capa |=
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		(enic->geneve ? DEV_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
-		(enic->vxlan ? DEV_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		(enic->geneve ? RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO : 0) |
+		(enic->vxlan ? RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO : 0);
 	enic->tx_offload_mask |=
 		PKT_TX_OUTER_IPV6 |
 		PKT_TX_OUTER_IPV4 |
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index a8f5332a407f..12f734260ca5 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -147,31 +147,31 @@ int enic_get_vnic_config(struct enic *enic)
 		 * IPV4 hash type handles both non-frag and frag packet types.
 		 * TCP/UDP is controlled via a separate flag below.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV4 |
-			ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV4 |
+			RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_TCP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (ENIC_SETTING(enic, RSSHASH_IPV6))
 		/*
 		 * The VIC adapter can perform RSS on IPv6 packets with and
 		 * without extension headers. An IPv6 "fragment" is an IPv6
 		 * packet with the fragment extension header.
 		 */
-		enic->flow_type_rss_offloads |= ETH_RSS_IPV6 |
-			ETH_RSS_IPV6_EX | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_IPV6 |
+			RTE_ETH_RSS_IPV6_EX | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER;
 	if (ENIC_SETTING(enic, RSSHASH_TCPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_TCP |
-			ETH_RSS_IPV6_TCP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			RTE_ETH_RSS_IPV6_TCP_EX;
 	if (enic->udp_rss_weak)
 		enic->flow_type_rss_offloads |=
-			ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+			RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV4))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV4_UDP;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (ENIC_SETTING(enic, RSSHASH_UDPIPV6))
-		enic->flow_type_rss_offloads |= ETH_RSS_NONFRAG_IPV6_UDP |
-			ETH_RSS_IPV6_UDP_EX;
+		enic->flow_type_rss_offloads |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			RTE_ETH_RSS_IPV6_UDP_EX;
 
 	/* Zero offloads if RSS is not enabled */
 	if (!ENIC_SETTING(enic, RSS))
@@ -201,20 +201,20 @@ int enic_get_vnic_config(struct enic *enic)
 	enic->tx_queue_offload_capa = 0;
 	enic->tx_offload_capa =
 		enic->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	enic->rx_offload_capa =
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	enic->tx_offload_mask =
 		PKT_TX_IPV6 |
 		PKT_TX_IPV4 |
diff --git a/drivers/net/failsafe/failsafe.c b/drivers/net/failsafe/failsafe.c
index 8216063a3d8b..9b22a6ce8941 100644
--- a/drivers/net/failsafe/failsafe.c
+++ b/drivers/net/failsafe/failsafe.c
@@ -17,10 +17,10 @@
 
 const char pmd_failsafe_driver_name[] = FAILSAFE_DRIVER_NAME;
 static const struct rte_eth_link eth_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_UP,
-	.link_autoneg = ETH_LINK_AUTONEG,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_UP,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG,
 };
 
 static int
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 602c04033c18..5f4810051dac 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -326,7 +326,7 @@ int failsafe_rx_intr_install_subdevice(struct sub_device *sdev)
 	int qid;
 	struct rte_eth_dev *fsdev;
 	struct rxq **rxq;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 				&ETH(sdev)->data->dev_conf.intr_conf;
 
 	fsdev = fs_dev(sdev);
@@ -519,7 +519,7 @@ int
 failsafe_rx_intr_install(struct rte_eth_dev *dev)
 {
 	struct fs_priv *priv = PRIV(dev);
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 			&priv->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq == 0 || dev->intr_handle != NULL)
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 5ff33e03e034..8cb215651df8 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -1182,53 +1182,53 @@ fs_dev_infos_get(struct rte_eth_dev *dev,
 	 * configuring a sub-device.
 	 */
 	infos->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->rx_queue_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_LRO |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_MACSEC_STRIP |
-		DEV_RX_OFFLOAD_HEADER_SPLIT |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_TIMESTAMP |
-		DEV_RX_OFFLOAD_SECURITY |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_LRO |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_MACSEC_STRIP |
+		RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+		RTE_ETH_RX_OFFLOAD_SECURITY |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	infos->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	infos->flow_type_rss_offloads =
-		ETH_RSS_IP |
-		ETH_RSS_UDP |
-		ETH_RSS_TCP;
+		RTE_ETH_RSS_IP |
+		RTE_ETH_RSS_UDP |
+		RTE_ETH_RSS_TCP;
 	infos->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 		RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
diff --git a/drivers/net/fm10k/fm10k.h b/drivers/net/fm10k/fm10k.h
index 916b856acc4b..7af115399e0f 100644
--- a/drivers/net/fm10k/fm10k.h
+++ b/drivers/net/fm10k/fm10k.h
@@ -177,7 +177,7 @@ struct fm10k_rx_queue {
 	uint8_t drop_en;
 	uint8_t rx_deferred_start; /* don't start this queue in dev start. */
 	uint16_t rx_ftag_en; /* indicates FTAG RX supported */
-	uint64_t offloads; /* offloads of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /* offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /*
@@ -209,7 +209,7 @@ struct fm10k_tx_queue {
 	uint16_t next_rs; /* Next pos to set RS flag */
 	uint16_t next_dd; /* Next pos to check DD flag */
 	volatile uint32_t *tail_ptr;
-	uint64_t offloads; /* Offloads of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /* Offloads of RTE_ETH_TX_OFFLOAD_* */
 	uint16_t nb_desc;
 	uint16_t port_id;
 	uint8_t tx_deferred_start; /** don't start this queue in dev start. */
diff --git a/drivers/net/fm10k/fm10k_ethdev.c b/drivers/net/fm10k/fm10k_ethdev.c
index 3236290e4021..e77cfa3f9882 100644
--- a/drivers/net/fm10k/fm10k_ethdev.c
+++ b/drivers/net/fm10k/fm10k_ethdev.c
@@ -413,12 +413,12 @@ fm10k_check_mq_mode(struct rte_eth_dev *dev)
 
 	vmdq_conf = &dev->data->dev_conf.rx_adv_conf.vmdq_rx_conf;
 
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		PMD_INIT_LOG(ERR, "DCB mode is not supported.");
 		return -EINVAL;
 	}
 
-	if (!(rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		return 0;
 
 	if (hw->mac.type == fm10k_mac_vf) {
@@ -449,8 +449,8 @@ fm10k_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multipe queue mode checking */
 	ret  = fm10k_check_mq_mode(dev);
@@ -510,7 +510,7 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 		0x6A, 0x42, 0xB7, 0x3B, 0xBE, 0xAC, 0x01, 0xFA,
 	};
 
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_RSS ||
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_RSS ||
 		dev_conf->rx_adv_conf.rss_conf.rss_hf == 0) {
 		FM10K_WRITE_REG(hw, FM10K_MRQC(0), 0);
 		return;
@@ -547,15 +547,15 @@ fm10k_dev_rss_configure(struct rte_eth_dev *dev)
 	 */
 	hf = dev_conf->rx_adv_conf.rss_conf.rss_hf;
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	if (mrqc == 0) {
 		PMD_INIT_LOG(ERR, "Specified RSS mode 0x%"PRIx64"is not"
@@ -602,7 +602,7 @@ fm10k_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	if (hw->mac.type != fm10k_mac_pf)
 		return;
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		nb_queue_pools = vmdq_conf->nb_queue_pools;
 
 	/* no pool number change, no need to update logic port and VLAN/MAC */
@@ -759,7 +759,7 @@ fm10k_dev_rx_init(struct rte_eth_dev *dev)
 		/* It adds dual VLAN length for supporting dual VLAN */
 		if ((dev->data->dev_conf.rxmode.max_rx_pkt_len +
 				2 * FM10K_VLAN_TAG_SIZE) > buf_size ||
-			rxq->offloads & DEV_RX_OFFLOAD_SCATTER) {
+			rxq->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 			uint32_t reg;
 			dev->data->scattered_rx = 1;
 			reg = FM10K_READ_REG(hw, FM10K_SRRCTL(i));
@@ -1145,7 +1145,7 @@ fm10k_dev_start(struct rte_eth_dev *dev)
 	}
 
 	/* Update default vlan when not in VMDQ mode */
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG))
 		fm10k_vlan_filter_set(dev, hw->mac.default_vid, true);
 
 	fm10k_link_update(dev, 0);
@@ -1222,11 +1222,11 @@ fm10k_link_update(struct rte_eth_dev *dev,
 		FM10K_DEV_PRIVATE_TO_INFO(dev->data->dev_private);
 	PMD_INIT_FUNC_TRACE();
 
-	dev->data->dev_link.link_speed  = ETH_SPEED_NUM_50G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_speed  = RTE_ETH_SPEED_NUM_50G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	dev->data->dev_link.link_status =
-		dev_info->sm_down ? ETH_LINK_DOWN : ETH_LINK_UP;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
+		dev_info->sm_down ? RTE_ETH_LINK_DOWN : RTE_ETH_LINK_UP;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return 0;
 }
@@ -1378,7 +1378,7 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 	dev_info->max_vfs            = pdev->max_vfs;
 	dev_info->vmdq_pool_base     = 0;
 	dev_info->vmdq_queue_base    = 0;
-	dev_info->max_vmdq_pools     = ETH_32_POOLS;
+	dev_info->max_vmdq_pools     = RTE_ETH_32_POOLS;
 	dev_info->vmdq_queue_num     = FM10K_MAX_QUEUES_PF;
 	dev_info->rx_queue_offload_capa = fm10k_get_rx_queue_offloads_capa(dev);
 	dev_info->rx_offload_capa = fm10k_get_rx_port_offloads_capa(dev) |
@@ -1389,15 +1389,15 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 
 	dev_info->hash_key_size = FM10K_RSSRK_SIZE * sizeof(uint32_t);
 	dev_info->reta_size = FM10K_MAX_RSS_INDICES;
-	dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-					ETH_RSS_IPV6 |
-					ETH_RSS_IPV6_EX |
-					ETH_RSS_NONFRAG_IPV4_TCP |
-					ETH_RSS_NONFRAG_IPV6_TCP |
-					ETH_RSS_IPV6_TCP_EX |
-					ETH_RSS_NONFRAG_IPV4_UDP |
-					ETH_RSS_NONFRAG_IPV6_UDP |
-					ETH_RSS_IPV6_UDP_EX;
+	dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+					RTE_ETH_RSS_IPV6 |
+					RTE_ETH_RSS_IPV6_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+					RTE_ETH_RSS_IPV6_TCP_EX |
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+					RTE_ETH_RSS_IPV6_UDP_EX;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -1435,9 +1435,9 @@ fm10k_dev_infos_get(struct rte_eth_dev *dev,
 		.nb_mtu_seg_max = FM10K_TX_MAX_MTU_SEG,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G |
-			ETH_LINK_SPEED_10G | ETH_LINK_SPEED_25G |
-			ETH_LINK_SPEED_40G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G |
+			RTE_ETH_LINK_SPEED_10G | RTE_ETH_LINK_SPEED_25G |
+			RTE_ETH_LINK_SPEED_40G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1509,7 +1509,7 @@ fm10k_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
 		return -EINVAL;
 	}
 
-	if (vlan_id > ETH_VLAN_ID_MAX) {
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX) {
 		PMD_INIT_LOG(ERR, "Invalid vlan_id: must be < 4096");
 		return -EINVAL;
 	}
@@ -1767,21 +1767,21 @@ static uint64_t fm10k_get_rx_queue_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_RX_OFFLOAD_SCATTER);
+	return (uint64_t)(RTE_ETH_RX_OFFLOAD_SCATTER);
 }
 
 static uint64_t fm10k_get_rx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return  (uint64_t)(DEV_RX_OFFLOAD_VLAN_STRIP  |
-			   DEV_RX_OFFLOAD_VLAN_FILTER |
-			   DEV_RX_OFFLOAD_IPV4_CKSUM  |
-			   DEV_RX_OFFLOAD_UDP_CKSUM   |
-			   DEV_RX_OFFLOAD_TCP_CKSUM   |
-			   DEV_RX_OFFLOAD_JUMBO_FRAME |
-			   DEV_RX_OFFLOAD_HEADER_SPLIT |
-			   DEV_RX_OFFLOAD_RSS_HASH);
+	return  (uint64_t)(RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+			   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			   RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+			   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+			   RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+			   RTE_ETH_RX_OFFLOAD_HEADER_SPLIT |
+			   RTE_ETH_RX_OFFLOAD_RSS_HASH);
 }
 
 static int
@@ -1966,12 +1966,12 @@ static uint64_t fm10k_get_tx_port_offloads_capa(struct rte_eth_dev *dev)
 {
 	RTE_SET_USED(dev);
 
-	return (uint64_t)(DEV_TX_OFFLOAD_VLAN_INSERT |
-			  DEV_TX_OFFLOAD_MULTI_SEGS  |
-			  DEV_TX_OFFLOAD_IPV4_CKSUM  |
-			  DEV_TX_OFFLOAD_UDP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_CKSUM   |
-			  DEV_TX_OFFLOAD_TCP_TSO);
+	return (uint64_t)(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+			  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+			  RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+			  RTE_ETH_TX_OFFLOAD_TCP_TSO);
 }
 
 static int
@@ -2112,8 +2112,8 @@ fm10k_reta_update(struct rte_eth_dev *dev,
 	 * 128-entries in 32 registers
 	 */
 	for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				BIT_MASK_PER_UINT32);
 		if (mask == 0)
@@ -2161,8 +2161,8 @@ fm10k_reta_query(struct rte_eth_dev *dev,
 	 * 128-entries in 32 registers
 	 */
 	for (i = 0; i < FM10K_MAX_RSS_INDICES; i += CHARS_PER_UINT32) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				BIT_MASK_PER_UINT32);
 		if (mask == 0)
@@ -2199,15 +2199,15 @@ fm10k_rss_hash_update(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	mrqc = 0;
-	mrqc |= (hf & ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
-	mrqc |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
-	mrqc |= (hf & ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV4)              ? FM10K_MRQC_IPV4     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6)              ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_EX)           ? FM10K_MRQC_IPV6     : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? FM10K_MRQC_TCP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? FM10K_MRQC_TCP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? FM10K_MRQC_UDP_IPV4 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? FM10K_MRQC_UDP_IPV6 : 0;
+	mrqc |= (hf & RTE_ETH_RSS_IPV6_UDP_EX)       ? FM10K_MRQC_UDP_IPV6 : 0;
 
 	/* If the mapping doesn't fit any supported, return */
 	if (mrqc == 0)
@@ -2244,15 +2244,15 @@ fm10k_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	mrqc = FM10K_READ_REG(hw, FM10K_MRQC(0));
 	hf = 0;
-	hf |= (mrqc & FM10K_MRQC_IPV4)     ? ETH_RSS_IPV4              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6              : 0;
-	hf |= (mrqc & FM10K_MRQC_IPV6)     ? ETH_RSS_IPV6_EX           : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? ETH_RSS_NONFRAG_IPV4_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_NONFRAG_IPV6_TCP  : 0;
-	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? ETH_RSS_IPV6_TCP_EX       : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? ETH_RSS_NONFRAG_IPV4_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_NONFRAG_IPV6_UDP  : 0;
-	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? ETH_RSS_IPV6_UDP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV4)     ? RTE_ETH_RSS_IPV4              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6              : 0;
+	hf |= (mrqc & FM10K_MRQC_IPV6)     ? RTE_ETH_RSS_IPV6_EX           : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_TCP  : 0;
+	hf |= (mrqc & FM10K_MRQC_TCP_IPV6) ? RTE_ETH_RSS_IPV6_TCP_EX       : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV4) ? RTE_ETH_RSS_NONFRAG_IPV4_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_NONFRAG_IPV6_UDP  : 0;
+	hf |= (mrqc & FM10K_MRQC_UDP_IPV6) ? RTE_ETH_RSS_IPV6_UDP_EX       : 0;
 
 	rss_conf->rss_hf = hf;
 
@@ -2607,7 +2607,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 
 			/* first clear the internal SW recording structure */
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					false);
 
@@ -2623,7 +2623,7 @@ fm10k_dev_interrupt_handler_pf(void *param)
 					MAIN_VSI_POOL_NUMBER);
 
 			if (!(dev->data->dev_conf.rxmode.mq_mode &
-						ETH_MQ_RX_VMDQ_FLAG))
+						RTE_ETH_MQ_RX_VMDQ_FLAG))
 				fm10k_vlan_filter_set(dev, hw->mac.default_vid,
 					true);
 
diff --git a/drivers/net/fm10k/fm10k_rxtx_vec.c b/drivers/net/fm10k/fm10k_rxtx_vec.c
index 83af01dc2da6..50973a662c67 100644
--- a/drivers/net/fm10k/fm10k_rxtx_vec.c
+++ b/drivers/net/fm10k/fm10k_rxtx_vec.c
@@ -208,11 +208,11 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 #ifndef RTE_FM10K_RX_OLFLAGS_ENABLE
 	/* without rx ol_flags, no VP flag report */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 #endif
 
@@ -221,7 +221,7 @@ fm10k_rx_vec_condition_check(struct rte_eth_dev *dev)
 		return -1;
 
 	/* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	return 0;
diff --git a/drivers/net/hinic/base/hinic_pmd_hwdev.c b/drivers/net/hinic/base/hinic_pmd_hwdev.c
index cb9cf6efa287..80f9eb5c3031 100644
--- a/drivers/net/hinic/base/hinic_pmd_hwdev.c
+++ b/drivers/net/hinic/base/hinic_pmd_hwdev.c
@@ -1320,28 +1320,28 @@ hinic_cable_status_event(u8 cmd, void *buf_in, __rte_unused u16 in_size,
 static int hinic_link_event_process(struct hinic_hwdev *hwdev,
 				    struct rte_eth_dev *eth_dev, u8 status)
 {
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 	struct nic_port_info port_info;
 	struct rte_eth_link link;
 	int rc = HINIC_OK;
 
 	if (!status) {
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
 		memset(&port_info, 0, sizeof(port_info));
 		rc = hinic_get_port_info(hwdev, &port_info);
 		if (rc) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
-			link.link_autoneg = ETH_LINK_FIXED;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+			link.link_autoneg = RTE_ETH_LINK_FIXED;
 		} else {
 			link.link_speed = port_speed[port_info.speed %
 						LINK_SPEED_MAX];
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 1a7240154668..105c0f48a616 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -311,8 +311,8 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* mtu size is 256~9600 */
 	if (dev->data->dev_conf.rxmode.max_rx_pkt_len < HINIC_MIN_FRAME_SIZE ||
@@ -338,7 +338,7 @@ static int hinic_dev_configure(struct rte_eth_dev *dev)
 
 	/* init vlan offoad */
 	err = hinic_vlan_offload_set(dev,
-				ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+				RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Initialize vlan filter and strip failed");
 		(void)hinic_config_mq_mode(dev, FALSE);
@@ -696,15 +696,15 @@ static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
 	} else {
 		*speed_capa = 0;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_1G))
-			*speed_capa |= ETH_LINK_SPEED_1G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_1G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_10G))
-			*speed_capa |= ETH_LINK_SPEED_10G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_10G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_25G))
-			*speed_capa |= ETH_LINK_SPEED_25G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_25G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_40G))
-			*speed_capa |= ETH_LINK_SPEED_40G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_40G;
 		if (!!(supported_link & HINIC_LINK_MODE_SUPPORT_100G))
-			*speed_capa |= ETH_LINK_SPEED_100G;
+			*speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	}
 }
 
@@ -732,25 +732,25 @@ hinic_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 
 	hinic_get_speed_capa(dev, &info->speed_capa);
 	info->rx_queue_offload_capa = 0;
-	info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_UDP_CKSUM |
-				DEV_RX_OFFLOAD_TCP_CKSUM |
-				DEV_RX_OFFLOAD_VLAN_FILTER |
-				DEV_RX_OFFLOAD_SCATTER |
-				DEV_RX_OFFLOAD_JUMBO_FRAME |
-				DEV_RX_OFFLOAD_TCP_LRO |
-				DEV_RX_OFFLOAD_RSS_HASH;
+	info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				RTE_ETH_RX_OFFLOAD_SCATTER |
+				RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+				RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	info->tx_queue_offload_capa = 0;
-	info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_UDP_CKSUM |
-				DEV_TX_OFFLOAD_TCP_CKSUM |
-				DEV_TX_OFFLOAD_SCTP_CKSUM |
-				DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				DEV_TX_OFFLOAD_TCP_TSO |
-				DEV_TX_OFFLOAD_MULTI_SEGS;
+	info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	info->hash_key_size = HINIC_RSS_KEY_SIZE;
 	info->reta_size = HINIC_RSS_INDIR_SIZE;
@@ -847,20 +847,20 @@ static int hinic_priv_get_dev_link_status(struct hinic_nic_dev *nic_dev,
 	u8 port_link_status = 0;
 	struct nic_port_info port_link_info;
 	struct hinic_hwdev *nic_hwdev = nic_dev->hwdev;
-	uint32_t port_speed[LINK_SPEED_MAX] = {ETH_SPEED_NUM_10M,
-					ETH_SPEED_NUM_100M, ETH_SPEED_NUM_1G,
-					ETH_SPEED_NUM_10G, ETH_SPEED_NUM_25G,
-					ETH_SPEED_NUM_40G, ETH_SPEED_NUM_100G};
+	uint32_t port_speed[LINK_SPEED_MAX] = {RTE_ETH_SPEED_NUM_10M,
+					RTE_ETH_SPEED_NUM_100M, RTE_ETH_SPEED_NUM_1G,
+					RTE_ETH_SPEED_NUM_10G, RTE_ETH_SPEED_NUM_25G,
+					RTE_ETH_SPEED_NUM_40G, RTE_ETH_SPEED_NUM_100G};
 
 	rc = hinic_get_link_status(nic_hwdev, &port_link_status);
 	if (rc)
 		return rc;
 
 	if (!port_link_status) {
-		link->link_status = ETH_LINK_DOWN;
+		link->link_status = RTE_ETH_LINK_DOWN;
 		link->link_speed = 0;
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
-		link->link_autoneg = ETH_LINK_FIXED;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link->link_autoneg = RTE_ETH_LINK_FIXED;
 		return HINIC_OK;
 	}
 
@@ -902,8 +902,8 @@ static int hinic_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* Get link status information from hardware */
 		rc = hinic_priv_get_dev_link_status(nic_dev, &link);
 		if (rc != HINIC_OK) {
-			link.link_speed = ETH_SPEED_NUM_NONE;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Get link status failed");
 			goto out;
 		}
@@ -1552,10 +1552,10 @@ static int hinic_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 	frame_size = HINIC_MTU_TO_PKTLEN(mtu);
 	if (frame_size > HINIC_ETH_MAX_LEN)
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 	nic_dev->mtu_size = mtu;
@@ -1664,8 +1664,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	int err;
 
 	/* Enable or disable VLAN filter */
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) ?
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) ?
 			TRUE : FALSE;
 		err = hinic_config_vlan_filter(nic_dev->hwdev, on);
 		if (err == HINIC_MGMT_CMD_UNSUPPORTED) {
@@ -1686,8 +1686,8 @@ static int hinic_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Enable or disable VLAN stripping */
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		on = (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) ?
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		on = (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) ?
 			TRUE : FALSE;
 		err = hinic_set_rx_vlan_offload(nic_dev->hwdev, on);
 		if (err) {
@@ -1873,13 +1873,13 @@ static int hinic_flow_ctrl_get(struct rte_eth_dev *dev,
 	fc_conf->autoneg = nic_pause.auto_neg;
 
 	if (nic_pause.tx_pause && nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (nic_pause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else if (nic_pause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -1893,14 +1893,14 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
 
 	nic_pause.auto_neg = fc_conf->autoneg;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		nic_pause.tx_pause = true;
 	else
 		nic_pause.tx_pause = false;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-		(fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+		(fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		nic_pause.rx_pause = true;
 	else
 		nic_pause.rx_pause = false;
@@ -1944,7 +1944,7 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err = 0;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_OK;
 	}
@@ -1965,14 +1965,14 @@ static int hinic_rss_hash_update(struct rte_eth_dev *dev,
 		}
 	}
 
-	rss_type.ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type.tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type.ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type.ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type.tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type.tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type.udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type.udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type.ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type.tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type.ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type.ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type.tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type.tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type.udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type.udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 
 	err = hinic_set_rss_type(nic_dev->hwdev, tmpl_idx, rss_type);
 	if (err) {
@@ -2008,7 +2008,7 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 	struct nic_rss_type rss_type = {0};
 	int err;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		PMD_DRV_LOG(WARNING, "RSS is not enabled");
 		return HINIC_ERROR;
 	}
@@ -2029,15 +2029,15 @@ static int hinic_rss_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	rss_conf->rss_hf |=  rss_type.ipv4 ?
-		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4) : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? ETH_RSS_NONFRAG_IPV4_TCP : 0;
+		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4) : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_TCP : 0;
 	rss_conf->rss_hf |=  rss_type.ipv6 ?
-		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6) : 0;
-	rss_conf->rss_hf |=  rss_type.ipv6_ext ? ETH_RSS_IPV6_EX : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? ETH_RSS_NONFRAG_IPV6_TCP : 0;
-	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? ETH_RSS_IPV6_TCP_EX : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? ETH_RSS_NONFRAG_IPV4_UDP : 0;
-	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? ETH_RSS_NONFRAG_IPV6_UDP : 0;
+		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6) : 0;
+	rss_conf->rss_hf |=  rss_type.ipv6_ext ? RTE_ETH_RSS_IPV6_EX : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_TCP : 0;
+	rss_conf->rss_hf |=  rss_type.tcp_ipv6_ext ? RTE_ETH_RSS_IPV6_TCP_EX : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv4 ? RTE_ETH_RSS_NONFRAG_IPV4_UDP : 0;
+	rss_conf->rss_hf |=  rss_type.udp_ipv6 ? RTE_ETH_RSS_NONFRAG_IPV6_UDP : 0;
 
 	return HINIC_OK;
 }
@@ -2067,7 +2067,7 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
 	u16 i = 0;
 	u16 idx, shift;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG))
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG))
 		return HINIC_OK;
 
 	if (reta_size != NIC_RSS_INDIR_SIZE) {
@@ -2081,8 +2081,8 @@ static int hinic_rss_indirtbl_update(struct rte_eth_dev *dev,
 
 	/* update rss indir_tbl */
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 
 		if (reta_conf[idx].reta[shift] >= nic_dev->num_rq) {
 			PMD_DRV_LOG(ERR, "Invalid reta entry, indirtbl[%d]: %d "
@@ -2147,8 +2147,8 @@ static int hinic_rss_indirtbl_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = (uint16_t)indirtbl[i];
 	}
diff --git a/drivers/net/hinic/hinic_pmd_rx.c b/drivers/net/hinic/hinic_pmd_rx.c
index 842399cc4cd8..d347afe9a6a9 100644
--- a/drivers/net/hinic/hinic_pmd_rx.c
+++ b/drivers/net/hinic/hinic_pmd_rx.c
@@ -504,14 +504,14 @@ static void hinic_fill_rss_type(struct nic_rss_type *rss_type,
 {
 	u64 rss_hf = rss_conf->rss_hf;
 
-	rss_type->ipv4 = (rss_hf & (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4)) ? 1 : 0;
-	rss_type->tcp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
-	rss_type->ipv6 = (rss_hf & (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6)) ? 1 : 0;
-	rss_type->ipv6_ext = (rss_hf & ETH_RSS_IPV6_EX) ? 1 : 0;
-	rss_type->tcp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
-	rss_type->tcp_ipv6_ext = (rss_hf & ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
-	rss_type->udp_ipv4 = (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
-	rss_type->udp_ipv6 = (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
+	rss_type->ipv4 = (rss_hf & (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4)) ? 1 : 0;
+	rss_type->tcp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) ? 1 : 0;
+	rss_type->ipv6 = (rss_hf & (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6)) ? 1 : 0;
+	rss_type->ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_EX) ? 1 : 0;
+	rss_type->tcp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) ? 1 : 0;
+	rss_type->tcp_ipv6_ext = (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) ? 1 : 0;
+	rss_type->udp_ipv4 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) ? 1 : 0;
+	rss_type->udp_ipv6 = (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) ? 1 : 0;
 }
 
 static void hinic_fillout_indir_tbl(struct hinic_nic_dev *nic_dev, u32 *indir)
@@ -588,8 +588,8 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 {
 	int err, i;
 
-	if (!(nic_dev->flags & ETH_MQ_RX_RSS_FLAG)) {
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+	if (!(nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG)) {
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 		nic_dev->num_rss = 0;
 		if (nic_dev->num_rq > 1) {
 			/* get rss template id */
@@ -599,7 +599,7 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 				PMD_DRV_LOG(WARNING, "Alloc rss template failed");
 				return err;
 			}
-			nic_dev->flags |= ETH_MQ_RX_RSS_FLAG;
+			nic_dev->flags |= RTE_ETH_MQ_RX_RSS_FLAG;
 			for (i = 0; i < nic_dev->num_rq; i++)
 				hinic_add_rq_to_rx_queue_list(nic_dev, i);
 		}
@@ -610,12 +610,12 @@ static int hinic_setup_num_qps(struct hinic_nic_dev *nic_dev)
 
 static void hinic_destroy_num_qps(struct hinic_nic_dev *nic_dev)
 {
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (hinic_rss_template_free(nic_dev->hwdev,
 					    nic_dev->rss_tmpl_idx))
 			PMD_DRV_LOG(WARNING, "Free rss template failed");
 
-		nic_dev->flags &= ~ETH_MQ_RX_RSS_FLAG;
+		nic_dev->flags &= ~RTE_ETH_MQ_RX_RSS_FLAG;
 	}
 }
 
@@ -641,7 +641,7 @@ int hinic_config_mq_mode(struct rte_eth_dev *dev, bool on)
 	int ret = 0;
 
 	switch (dev_conf->rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		ret = hinic_config_mq_rx_rss(nic_dev, on);
 		break;
 	default:
@@ -662,7 +662,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	int lro_wqe_num;
 	int buf_size;
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		if (rss_conf.rss_hf == 0) {
 			rss_conf.rss_hf = HINIC_RSS_OFFLOAD_ALL;
 		} else if ((rss_conf.rss_hf & HINIC_RSS_OFFLOAD_ALL) == 0) {
@@ -678,7 +678,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic_dev->rx_csum_en = HINIC_RX_CSUM_OFFLOAD_EN;
 
 	err = hinic_set_rx_csum_offload(nic_dev->hwdev,
@@ -687,7 +687,7 @@ int hinic_rx_configure(struct rte_eth_dev *dev)
 		goto rx_csum_ofl_err;
 
 	/* config lro */
-	lro_en = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ?
+	lro_en = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ?
 			true : false;
 	max_lro_size = dev->data->dev_conf.rxmode.max_lro_pkt_size;
 	buf_size = nic_dev->hwdev->nic_io->rq_buf_size;
@@ -726,7 +726,7 @@ void hinic_rx_remove_configure(struct rte_eth_dev *dev)
 {
 	struct hinic_nic_dev *nic_dev = HINIC_ETH_DEV_TO_PRIVATE_NIC_DEV(dev);
 
-	if (nic_dev->flags & ETH_MQ_RX_RSS_FLAG) {
+	if (nic_dev->flags & RTE_ETH_MQ_RX_RSS_FLAG) {
 		hinic_rss_deinit(nic_dev);
 		hinic_destroy_num_qps(nic_dev);
 	}
diff --git a/drivers/net/hinic/hinic_pmd_rx.h b/drivers/net/hinic/hinic_pmd_rx.h
index 8a45f2d9fc50..5c303398b635 100644
--- a/drivers/net/hinic/hinic_pmd_rx.h
+++ b/drivers/net/hinic/hinic_pmd_rx.h
@@ -8,17 +8,17 @@
 #define HINIC_DEFAULT_RX_FREE_THRESH	32
 
 #define HINIC_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |\
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |\
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 enum rq_completion_fmt {
 	RQ_COMPLETE_SGE = 1
diff --git a/drivers/net/hns3/hns3_dcb.c b/drivers/net/hns3/hns3_dcb.c
index b71e2e9ea451..953c146d0200 100644
--- a/drivers/net/hns3/hns3_dcb.c
+++ b/drivers/net/hns3/hns3_dcb.c
@@ -1536,7 +1536,7 @@ hns3_dcb_hw_configure(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	if (hw->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (hw->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_rx_conf = &hw->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 		if (dcb_rx_conf->nb_tcs == 0)
 			hw->dcb_info.pfc_en = 1; /* tc0 only */
@@ -1693,7 +1693,7 @@ hns3_update_queue_map_configure(struct hns3_adapter *hns)
 	uint16_t nb_tx_q = hw->data->nb_tx_queues;
 	int ret;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		return 0;
 
 	ret = hns3_dcb_update_tc_queue_mapping(hw, nb_rx_q, nb_tx_q);
@@ -1713,22 +1713,22 @@ static void
 hns3_get_fc_mode(struct hns3_hw *hw, enum rte_eth_fc_mode mode)
 {
 	switch (mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_RX_PAUSE;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->requested_fc_mode = HNS3_FC_TX_PAUSE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->requested_fc_mode = HNS3_FC_FULL;
 		break;
 	default:
 		hw->requested_fc_mode = HNS3_FC_NONE;
 		hns3_warn(hw, "fc_mode(%u) exceeds member scope and is "
-			  "configured to RTE_FC_NONE", mode);
+			  "configured to RTE_ETH_FC_NONE", mode);
 		break;
 	}
 }
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 7d37004972bf..64d1da09a707 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -60,29 +60,29 @@ enum hns3_evt_cause {
 };
 
 static const struct rte_eth_fec_capa speed_fec_capa_tbl[] = {
-	{ ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_10G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_25G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_40G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) },
 
-	{ ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_50G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(BASER) |
 			     RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_100G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) },
 
-	{ ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
+	{ RTE_ETH_SPEED_NUM_200G, RTE_ETH_FEC_MODE_CAPA_MASK(NOFEC) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(AUTO) |
 			      RTE_ETH_FEC_MODE_CAPA_MASK(RS) }
 };
@@ -500,8 +500,8 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	struct hns3_cmd_desc desc;
 	int ret;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER)) {
 		hns3_err(hw, "Unsupported vlan type, vlan_type =%d", vlan_type);
 		return -EINVAL;
 	}
@@ -514,10 +514,10 @@ hns3_vlan_tpid_configure(struct hns3_adapter *hns, enum rte_vlan_type vlan_type,
 	hns3_cmd_setup_basic_desc(&desc, HNS3_OPC_MAC_VLAN_TYPE_ID, false);
 	rx_req = (struct hns3_rx_vlan_type_cfg_cmd *)desc.data;
 
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
-	} else if (vlan_type == ETH_VLAN_TYPE_INNER) {
+	} else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER) {
 		rx_req->ot_fst_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->ot_sec_vlan_type = rte_cpu_to_le_16(tpid);
 		rx_req->in_fst_vlan_type = rte_cpu_to_le_16(tpid);
@@ -725,11 +725,11 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	rte_spinlock_lock(&hw->lock);
 	rxmode = &dev->data->dev_conf.rxmode;
 	tmp_mask = (unsigned int)mask;
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* ignore vlan filter configuration during promiscuous mode */
 		if (!dev->data->promiscuous) {
 			/* Enable or disable VLAN filter */
-			enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER ?
+			enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ?
 				 true : false;
 
 			ret = hns3_enable_vlan_filter(hns, enable);
@@ -742,9 +742,9 @@ hns3_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		enable = rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP ?
+		enable = rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ?
 		    true : false;
 
 		ret = hns3_en_hw_strip_rxvtag(hns, enable);
@@ -1118,7 +1118,7 @@ hns3_init_vlan_config(struct hns3_adapter *hns)
 		return ret;
 	}
 
-	ret = hns3_vlan_tpid_configure(hns, ETH_VLAN_TYPE_INNER,
+	ret = hns3_vlan_tpid_configure(hns, RTE_ETH_VLAN_TYPE_INNER,
 				       RTE_ETHER_TYPE_VLAN);
 	if (ret) {
 		hns3_err(hw, "tpid set fail in pf, ret =%d", ret);
@@ -1161,7 +1161,7 @@ hns3_restore_vlan_conf(struct hns3_adapter *hns)
 	if (!hw->data->promiscuous) {
 		/* restore vlan filter states */
 		offloads = hw->data->dev_conf.rxmode.offloads;
-		enable = offloads & DEV_RX_OFFLOAD_VLAN_FILTER ? true : false;
+		enable = offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER ? true : false;
 		ret = hns3_enable_vlan_filter(hns, enable);
 		if (ret) {
 			hns3_err(hw, "failed to restore vlan rx filter conf, "
@@ -1204,7 +1204,7 @@ hns3_dev_configure_vlan(struct rte_eth_dev *dev)
 			  txmode->hw_vlan_reject_untagged);
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	ret = hns3_vlan_offload_set(dev, mask);
 	if (ret) {
 		hns3_err(hw, "dev config rx vlan offload failed, ret = %d",
@@ -2218,9 +2218,9 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 	int max_tc = 0;
 	int i;
 
-	if ((rx_mq_mode & ETH_MQ_RX_VMDQ_FLAG) ||
-	    (tx_mq_mode == ETH_MQ_TX_VMDQ_DCB ||
-	     tx_mq_mode == ETH_MQ_TX_VMDQ_ONLY)) {
+	if ((rx_mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) ||
+	    (tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB ||
+	     tx_mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)) {
 		hns3_err(hw, "VMDQ is not supported, rx_mq_mode = %d, tx_mq_mode = %d.",
 			 rx_mq_mode, tx_mq_mode);
 		return -EOPNOTSUPP;
@@ -2228,7 +2228,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 	dcb_rx_conf = &dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
 	dcb_tx_conf = &dev->data->dev_conf.tx_adv_conf.dcb_tx_conf;
-	if (rx_mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (rx_mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		if (dcb_rx_conf->nb_tcs > pf->tc_max) {
 			hns3_err(hw, "nb_tcs(%u) > max_tc(%u) driver supported.",
 				 dcb_rx_conf->nb_tcs, pf->tc_max);
@@ -2237,7 +2237,7 @@ hns3_check_mq_mode(struct rte_eth_dev *dev)
 
 		if (!(dcb_rx_conf->nb_tcs == HNS3_4_TCS ||
 		      dcb_rx_conf->nb_tcs == HNS3_8_TCS)) {
-			hns3_err(hw, "on ETH_MQ_RX_DCB_RSS mode, "
+			hns3_err(hw, "on RTE_ETH_MQ_RX_DCB_RSS mode, "
 				 "nb_tcs(%d) != %d or %d in rx direction.",
 				 dcb_rx_conf->nb_tcs, HNS3_4_TCS, HNS3_8_TCS);
 			return -EINVAL;
@@ -2380,7 +2380,7 @@ hns3_refresh_mtu(struct rte_eth_dev *dev, struct rte_eth_conf *conf)
 	uint16_t mtu;
 	int ret;
 
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME))
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME))
 		return 0;
 
 	/*
@@ -2440,11 +2440,11 @@ hns3_check_link_speed(struct hns3_hw *hw, uint32_t link_speeds)
 	 * configure link_speeds (default 0), which means auto-negotiation.
 	 * In this case, it should return success.
 	 */
-	if (link_speeds == ETH_LINK_SPEED_AUTONEG &&
+	if (link_speeds == RTE_ETH_LINK_SPEED_AUTONEG &&
 	    hw->mac.support_autoneg == 0)
 		return 0;
 
-	if (link_speeds != ETH_LINK_SPEED_AUTONEG) {
+	if (link_speeds != RTE_ETH_LINK_SPEED_AUTONEG) {
 		ret = hns3_check_port_speed(hw, link_speeds);
 		if (ret)
 			return ret;
@@ -2504,15 +2504,15 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 	if (ret)
 		goto cfg_err;
 
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = hns3_setup_dcb(dev);
 		if (ret)
 			goto cfg_err;
 	}
 
 	/* When RSS is not configured, redirect the packet queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		hw->rss_dis_flag = false;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -2533,7 +2533,7 @@ hns3_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -2633,10 +2633,10 @@ hns3_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (is_jumbo_frame)
 		dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 	rte_spinlock_unlock(&hw->lock);
 
@@ -2649,15 +2649,15 @@ hns3_get_copper_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_10M_BIT)
-		speed_capa |= ETH_LINK_SPEED_10M;
+		speed_capa |= RTE_ETH_LINK_SPEED_10M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_HD_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M_HD;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M_HD;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_100M_BIT)
-		speed_capa |= ETH_LINK_SPEED_100M;
+		speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	if (supported_speed & HNS3_PHY_LINK_SPEED_1000M_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 
 	return speed_capa;
 }
@@ -2668,19 +2668,19 @@ hns3_get_firber_port_speed_capa(uint32_t supported_speed)
 	uint32_t speed_capa = 0;
 
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_1G_BIT)
-		speed_capa |= ETH_LINK_SPEED_1G;
+		speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_10G_BIT)
-		speed_capa |= ETH_LINK_SPEED_10G;
+		speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_25G_BIT)
-		speed_capa |= ETH_LINK_SPEED_25G;
+		speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_40G_BIT)
-		speed_capa |= ETH_LINK_SPEED_40G;
+		speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_50G_BIT)
-		speed_capa |= ETH_LINK_SPEED_50G;
+		speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_100G_BIT)
-		speed_capa |= ETH_LINK_SPEED_100G;
+		speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (supported_speed & HNS3_FIBER_LINK_SPEED_200G_BIT)
-		speed_capa |= ETH_LINK_SPEED_200G;
+		speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	return speed_capa;
 }
@@ -2699,7 +2699,7 @@ hns3_get_speed_capa(struct hns3_hw *hw)
 			hns3_get_firber_port_speed_capa(mac->supported_speed);
 
 	if (mac->support_autoneg == 0)
-		speed_capa |= ETH_LINK_SPEED_FIXED;
+		speed_capa |= RTE_ETH_LINK_SPEED_FIXED;
 
 	return speed_capa;
 }
@@ -2725,41 +2725,41 @@ hns3_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mac_addrs = HNS3_UC_MACADDR_NUM;
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_KEEP_CRC |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_JUMBO_FRAME |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_outer_udp_cksum_supported(hw))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_indep_txrx_supported(hw))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
 				 RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
 
 	if (hns3_dev_ptp_supported(hw))
-		info->rx_offload_capa |= DEV_RX_OFFLOAD_TIMESTAMP;
+		info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	info->rx_desc_lim = (struct rte_eth_desc_lim) {
 		.nb_max = HNS3_MAX_RING_DESC,
@@ -2843,7 +2843,7 @@ hns3_update_port_link_info(struct rte_eth_dev *eth_dev)
 
 	ret = hns3_update_link_info(eth_dev);
 	if (ret)
-		hw->mac.link_status = ETH_LINK_DOWN;
+		hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	return ret;
 }
@@ -2856,29 +2856,29 @@ hns3_setup_linkstatus(struct rte_eth_dev *eth_dev,
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link->link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link->link_speed = ETH_SPEED_NUM_NONE;
+		new_link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link->link_duplex = mac->link_duplex;
-	new_link->link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link->link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link->link_autoneg = mac->link_autoneg;
 }
 
@@ -2898,8 +2898,8 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 	if (eth_dev->data->dev_started == 0) {
 		new_link.link_autoneg = mac->link_autoneg;
 		new_link.link_duplex = mac->link_duplex;
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
-		new_link.link_status = ETH_LINK_DOWN;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		new_link.link_status = RTE_ETH_LINK_DOWN;
 		goto out;
 	}
 
@@ -2911,7 +2911,7 @@ hns3_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
 			break;
 		}
 
-		if (!wait_to_complete || mac->link_status == ETH_LINK_UP)
+		if (!wait_to_complete || mac->link_status == RTE_ETH_LINK_UP)
 			break;
 
 		rte_delay_ms(HNS3_LINK_CHECK_INTERVAL);
@@ -3257,31 +3257,31 @@ hns3_parse_speed(int speed_cmd, uint32_t *speed)
 {
 	switch (speed_cmd) {
 	case HNS3_CFG_SPEED_10M:
-		*speed = ETH_SPEED_NUM_10M;
+		*speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case HNS3_CFG_SPEED_100M:
-		*speed = ETH_SPEED_NUM_100M;
+		*speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case HNS3_CFG_SPEED_1G:
-		*speed = ETH_SPEED_NUM_1G;
+		*speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case HNS3_CFG_SPEED_10G:
-		*speed = ETH_SPEED_NUM_10G;
+		*speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case HNS3_CFG_SPEED_25G:
-		*speed = ETH_SPEED_NUM_25G;
+		*speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case HNS3_CFG_SPEED_40G:
-		*speed = ETH_SPEED_NUM_40G;
+		*speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case HNS3_CFG_SPEED_50G:
-		*speed = ETH_SPEED_NUM_50G;
+		*speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case HNS3_CFG_SPEED_100G:
-		*speed = ETH_SPEED_NUM_100G;
+		*speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case HNS3_CFG_SPEED_200G:
-		*speed = ETH_SPEED_NUM_200G;
+		*speed = RTE_ETH_SPEED_NUM_200G;
 		break;
 	default:
 		return -EINVAL;
@@ -3610,39 +3610,39 @@ hns3_cfg_mac_speed_dup_hw(struct hns3_hw *hw, uint32_t speed, uint8_t duplex)
 	hns3_set_bit(req->speed_dup, HNS3_CFG_DUPLEX_B, !!duplex ? 1 : 0);
 
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_10M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10M);
 		break;
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100M);
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_1G);
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_10G);
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_25G);
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_40G);
 		break;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_50G);
 		break;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_100G);
 		break;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		hns3_set_field(req->speed_dup, HNS3_CFG_SPEED_M,
 			       HNS3_CFG_SPEED_S, HNS3_CFG_SPEED_200G);
 		break;
@@ -4305,14 +4305,14 @@ hns3_mac_init(struct hns3_hw *hw)
 	int ret;
 
 	pf->support_sfp_query = true;
-	mac->link_duplex = ETH_LINK_FULL_DUPLEX;
+	mac->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	ret = hns3_cfg_mac_speed_dup_hw(hw, mac->link_speed, mac->link_duplex);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Config mac speed dup fail ret = %d", ret);
 		return ret;
 	}
 
-	mac->link_status = ETH_LINK_DOWN;
+	mac->link_status = RTE_ETH_LINK_DOWN;
 
 	return hns3_config_mtu(hw, pf->mps);
 }
@@ -4562,7 +4562,7 @@ hns3_dev_promiscuous_enable(struct rte_eth_dev *dev)
 	 * all packets coming in in the receiving direction.
 	 */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, false);
 		if (ret) {
 			hns3_err(hw, "failed to enable promiscuous mode due to "
@@ -4603,7 +4603,7 @@ hns3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	}
 	/* when promiscuous mode was disabled, restore the vlan filter status */
 	offloads = dev->data->dev_conf.rxmode.offloads;
-	if (offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = hns3_enable_vlan_filter(hns, true);
 		if (ret) {
 			hns3_err(hw, "failed to disable promiscuous mode due to"
@@ -4723,8 +4723,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 		mac_info->supported_speed =
 					rte_le_to_cpu_32(resp->supported_speed);
 		mac_info->support_autoneg = resp->autoneg_ability;
-		mac_info->link_autoneg = (resp->autoneg == 0) ? ETH_LINK_FIXED
-					: ETH_LINK_AUTONEG;
+		mac_info->link_autoneg = (resp->autoneg == 0) ? RTE_ETH_LINK_FIXED
+					: RTE_ETH_LINK_AUTONEG;
 	} else {
 		mac_info->query_type = HNS3_DEFAULT_QUERY;
 	}
@@ -4735,8 +4735,8 @@ hns3_get_sfp_info(struct hns3_hw *hw, struct hns3_mac *mac_info)
 static uint8_t
 hns3_check_speed_dup(uint8_t duplex, uint32_t speed)
 {
-	if (!(speed == ETH_SPEED_NUM_10M || speed == ETH_SPEED_NUM_100M))
-		duplex = ETH_LINK_FULL_DUPLEX;
+	if (!(speed == RTE_ETH_SPEED_NUM_10M || speed == RTE_ETH_SPEED_NUM_100M))
+		duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	return duplex;
 }
@@ -4786,7 +4786,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 		return ret;
 
 	/* Do nothing if no SFP */
-	if (mac_info.link_speed == ETH_SPEED_NUM_NONE)
+	if (mac_info.link_speed == RTE_ETH_SPEED_NUM_NONE)
 		return 0;
 
 	/*
@@ -4813,7 +4813,7 @@ hns3_update_fiber_link_info(struct hns3_hw *hw)
 
 	/* Config full duplex for SFP */
 	return hns3_cfg_mac_speed_dup(hw, mac_info.link_speed,
-				      ETH_LINK_FULL_DUPLEX);
+				      RTE_ETH_LINK_FULL_DUPLEX);
 }
 
 static void
@@ -4932,10 +4932,10 @@ hns3_cfg_mac_mode(struct hns3_hw *hw, bool enable)
 	hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_B, val);
 
 	/*
-	 * If DEV_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
+	 * If RTE_ETH_RX_OFFLOAD_KEEP_CRC offload is set, MAC will not strip CRC
 	 * when receiving frames. Otherwise, CRC will be stripped.
 	 */
-	if (hw->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (hw->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, 0);
 	else
 		hns3_set_bit(loop_en, HNS3_MAC_RX_FCS_STRIP_B, val);
@@ -4963,7 +4963,7 @@ hns3_get_mac_link_status(struct hns3_hw *hw)
 	ret = hns3_cmd_send(hw, &desc, 1);
 	if (ret) {
 		hns3_err(hw, "get link status cmd failed %d", ret);
-		return ETH_LINK_DOWN;
+		return RTE_ETH_LINK_DOWN;
 	}
 
 	req = (struct hns3_link_status_cmd *)desc.data;
@@ -5145,19 +5145,19 @@ hns3_set_firber_default_support_speed(struct hns3_hw *hw)
 	struct hns3_mac *mac = &hw->mac;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		return HNS3_FIBER_LINK_SPEED_1G_BIT;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		return HNS3_FIBER_LINK_SPEED_10G_BIT;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		return HNS3_FIBER_LINK_SPEED_25G_BIT;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		return HNS3_FIBER_LINK_SPEED_40G_BIT;
-	case ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_50G:
 		return HNS3_FIBER_LINK_SPEED_50G_BIT;
-	case ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_100G:
 		return HNS3_FIBER_LINK_SPEED_100G_BIT;
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_200G:
 		return HNS3_FIBER_LINK_SPEED_200G_BIT;
 	default:
 		hns3_warn(hw, "invalid speed %u Mbps.", mac->link_speed);
@@ -5395,20 +5395,20 @@ hns3_convert_link_speeds2bitmap_copper(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_10M:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_10M:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_BIT;
 		break;
-	case ETH_LINK_SPEED_10M_HD:
+	case RTE_ETH_LINK_SPEED_10M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_10M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_100M:
+	case RTE_ETH_LINK_SPEED_100M:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_BIT;
 		break;
-	case ETH_LINK_SPEED_100M_HD:
+	case RTE_ETH_LINK_SPEED_100M_HD:
 		speed_bit = HNS3_PHY_LINK_SPEED_100M_HD_BIT;
 		break;
-	case ETH_LINK_SPEED_1G:
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_PHY_LINK_SPEED_1000M_BIT;
 		break;
 	default:
@@ -5424,26 +5424,26 @@ hns3_convert_link_speeds2bitmap_fiber(uint32_t link_speeds)
 {
 	uint32_t speed_bit;
 
-	switch (link_speeds & ~ETH_LINK_SPEED_FIXED) {
-	case ETH_LINK_SPEED_1G:
+	switch (link_speeds & ~RTE_ETH_LINK_SPEED_FIXED) {
+	case RTE_ETH_LINK_SPEED_1G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_1G_BIT;
 		break;
-	case ETH_LINK_SPEED_10G:
+	case RTE_ETH_LINK_SPEED_10G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_10G_BIT;
 		break;
-	case ETH_LINK_SPEED_25G:
+	case RTE_ETH_LINK_SPEED_25G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_25G_BIT;
 		break;
-	case ETH_LINK_SPEED_40G:
+	case RTE_ETH_LINK_SPEED_40G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_40G_BIT;
 		break;
-	case ETH_LINK_SPEED_50G:
+	case RTE_ETH_LINK_SPEED_50G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_50G_BIT;
 		break;
-	case ETH_LINK_SPEED_100G:
+	case RTE_ETH_LINK_SPEED_100G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_100G_BIT;
 		break;
-	case ETH_LINK_SPEED_200G:
+	case RTE_ETH_LINK_SPEED_200G:
 		speed_bit = HNS3_FIBER_LINK_SPEED_200G_BIT;
 		break;
 	default:
@@ -5478,28 +5478,28 @@ hns3_check_port_speed(struct hns3_hw *hw, uint32_t link_speeds)
 static inline uint32_t
 hns3_get_link_speed(uint32_t link_speeds)
 {
-	uint32_t speed = ETH_SPEED_NUM_NONE;
-
-	if (link_speeds & ETH_LINK_SPEED_10M ||
-	    link_speeds & ETH_LINK_SPEED_10M_HD)
-		speed = ETH_SPEED_NUM_10M;
-	if (link_speeds & ETH_LINK_SPEED_100M ||
-	    link_speeds & ETH_LINK_SPEED_100M_HD)
-		speed = ETH_SPEED_NUM_100M;
-	if (link_speeds & ETH_LINK_SPEED_1G)
-		speed = ETH_SPEED_NUM_1G;
-	if (link_speeds & ETH_LINK_SPEED_10G)
-		speed = ETH_SPEED_NUM_10G;
-	if (link_speeds & ETH_LINK_SPEED_25G)
-		speed = ETH_SPEED_NUM_25G;
-	if (link_speeds & ETH_LINK_SPEED_40G)
-		speed = ETH_SPEED_NUM_40G;
-	if (link_speeds & ETH_LINK_SPEED_50G)
-		speed = ETH_SPEED_NUM_50G;
-	if (link_speeds & ETH_LINK_SPEED_100G)
-		speed = ETH_SPEED_NUM_100G;
-	if (link_speeds & ETH_LINK_SPEED_200G)
-		speed = ETH_SPEED_NUM_200G;
+	uint32_t speed = RTE_ETH_SPEED_NUM_NONE;
+
+	if (link_speeds & RTE_ETH_LINK_SPEED_10M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_10M_HD)
+		speed = RTE_ETH_SPEED_NUM_10M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M ||
+	    link_speeds & RTE_ETH_LINK_SPEED_100M_HD)
+		speed = RTE_ETH_SPEED_NUM_100M;
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
+		speed = RTE_ETH_SPEED_NUM_1G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
+		speed = RTE_ETH_SPEED_NUM_10G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
+		speed = RTE_ETH_SPEED_NUM_25G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
+		speed = RTE_ETH_SPEED_NUM_40G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_50G)
+		speed = RTE_ETH_SPEED_NUM_50G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
+		speed = RTE_ETH_SPEED_NUM_100G;
+	if (link_speeds & RTE_ETH_LINK_SPEED_200G)
+		speed = RTE_ETH_SPEED_NUM_200G;
 
 	return speed;
 }
@@ -5507,11 +5507,11 @@ hns3_get_link_speed(uint32_t link_speeds)
 static uint8_t
 hns3_get_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-	    (link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+	    (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 static int
@@ -5645,9 +5645,9 @@ hns3_apply_link_speed(struct hns3_hw *hw)
 	struct hns3_set_link_speed_cfg cfg;
 
 	memset(&cfg, 0, sizeof(struct hns3_set_link_speed_cfg));
-	cfg.autoneg = (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) ?
-			ETH_LINK_AUTONEG : ETH_LINK_FIXED;
-	if (cfg.autoneg != ETH_LINK_AUTONEG) {
+	cfg.autoneg = (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) ?
+			RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
+	if (cfg.autoneg != RTE_ETH_LINK_AUTONEG) {
 		cfg.speed = hns3_get_link_speed(conf->link_speeds);
 		cfg.duplex = hns3_get_link_duplex(conf->link_speeds);
 	}
@@ -5920,7 +5920,7 @@ hns3_do_stop(struct hns3_adapter *hns)
 	ret = hns3_cfg_mac_mode(hw, false);
 	if (ret)
 		return ret;
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	if (__atomic_load_n(&hw->reset.disable_cmd, __ATOMIC_RELAXED) == 0) {
 		hns3_configure_all_mac_addr(hns, true);
@@ -6131,17 +6131,17 @@ hns3_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	current_mode = hns3_get_current_fc_mode(dev);
 	switch (current_mode) {
 	case HNS3_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case HNS3_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case HNS3_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case HNS3_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	}
 
@@ -6287,7 +6287,7 @@ hns3_get_dcb_info(struct rte_eth_dev *dev, struct rte_eth_dcb_info *dcb_info)
 	int i;
 
 	rte_spinlock_lock(&hw->lock);
-	if ((uint32_t)mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = pf->local_max_tc;
 	else
 		dcb_info->nb_tcs = 1;
@@ -6587,7 +6587,7 @@ hns3_stop_service(struct hns3_adapter *hns)
 	struct rte_eth_dev *eth_dev;
 
 	eth_dev = &rte_eth_devices[hw->data->port_id];
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 	if (hw->adapter_state == HNS3_NIC_STARTED) {
 		rte_eal_alarm_cancel(hns3_service_handler, eth_dev);
 		hns3_update_linkstatus_and_event(hw, false);
@@ -6877,7 +6877,7 @@ get_current_fec_auto_state(struct hns3_hw *hw, uint8_t *state)
 	 * in device of link speed
 	 * below 10 Gbps.
 	 */
-	if (hw->mac.link_speed < ETH_SPEED_NUM_10G) {
+	if (hw->mac.link_speed < RTE_ETH_SPEED_NUM_10G) {
 		*state = 0;
 		return 0;
 	}
@@ -6909,7 +6909,7 @@ hns3_fec_get_internal(struct hns3_hw *hw, uint32_t *fec_capa)
 	 * configured FEC mode is returned.
 	 * If link is up, current FEC mode is returned.
 	 */
-	if (hw->mac.link_status == ETH_LINK_DOWN) {
+	if (hw->mac.link_status == RTE_ETH_LINK_DOWN) {
 		ret = get_current_fec_auto_state(hw, &auto_state);
 		if (ret)
 			return ret;
@@ -7008,12 +7008,12 @@ get_current_speed_fec_cap(struct hns3_hw *hw, struct rte_eth_fec_capa *fec_capa)
 	uint32_t cur_capa;
 
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		cur_capa = fec_capa[1].capa;
 		break;
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		cur_capa = fec_capa[0].capa;
 		break;
 	default:
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index 0e4e4269a12f..c40d28af1d46 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -191,10 +191,10 @@ struct hns3_mac {
 	bool default_addr_setted; /* whether default addr(mac_addr) is set */
 	uint8_t media_type;
 	uint8_t phy_addr;
-	uint8_t link_duplex  : 1; /* ETH_LINK_[HALF/FULL]_DUPLEX */
-	uint8_t link_autoneg : 1; /* ETH_LINK_[AUTONEG/FIXED] */
-	uint8_t link_status  : 1; /* ETH_LINK_[DOWN/UP] */
-	uint32_t link_speed;      /* ETH_SPEED_NUM_ */
+	uint8_t link_duplex  : 1; /* RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint8_t link_autoneg : 1; /* RTE_ETH_LINK_[AUTONEG/FIXED] */
+	uint8_t link_status  : 1; /* RTE_ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;      /* RTE_ETH_SPEED_NUM_ */
 	/*
 	 * Some firmware versions support only the SFP speed query. In addition
 	 * to the SFP speed query, some firmware supports the query of the speed
@@ -1114,9 +1114,9 @@ static inline uint64_t
 hns3_txvlan_cap_get(struct hns3_hw *hw)
 {
 	if (hw->port_base_vlan_cfg.state)
-		return DEV_TX_OFFLOAD_VLAN_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	else
-		return DEV_TX_OFFLOAD_VLAN_INSERT | DEV_TX_OFFLOAD_QINQ_INSERT;
+		return RTE_ETH_TX_OFFLOAD_VLAN_INSERT | RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 }
 
 #endif /* _HNS3_ETHDEV_H_ */
diff --git a/drivers/net/hns3/hns3_ethdev_vf.c b/drivers/net/hns3/hns3_ethdev_vf.c
index 8d9b7979c806..53d79bb2106c 100644
--- a/drivers/net/hns3/hns3_ethdev_vf.c
+++ b/drivers/net/hns3/hns3_ethdev_vf.c
@@ -809,15 +809,15 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	hw->adapter_state = HNS3_NIC_CONFIGURING;
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		hns3_err(hw, "setting link speed/duplex not supported");
 		ret = -EINVAL;
 		goto cfg_err;
 	}
 
 	/* When RSS is not configured, redirect the packet queue 0 */
-	if ((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) {
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if ((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) {
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		hw->rss_dis_flag = false;
 		rss_conf = conf->rx_adv_conf.rss_conf;
 		ret = hns3_dev_rss_hash_update(dev, &rss_conf);
@@ -829,7 +829,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 	 * If jumbo frames are enabled, MTU needs to be refreshed
 	 * according to the maximum RX packet length.
 	 */
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		max_rx_pkt_len = conf->rxmode.max_rx_pkt_len;
 		if (max_rx_pkt_len > HNS3_MAX_FRAME_LEN ||
 		    max_rx_pkt_len <= HNS3_DEFAULT_FRAME_LEN) {
@@ -853,7 +853,7 @@ hns3vf_dev_configure(struct rte_eth_dev *dev)
 		goto cfg_err;
 
 	/* config hardware GRO */
-	gro_en = conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		goto cfg_err;
@@ -931,10 +931,10 @@ hns3vf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	}
 	if (mtu > RTE_ETHER_MTU)
 		dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-						~DEV_RX_OFFLOAD_JUMBO_FRAME;
+						~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 	rte_spinlock_unlock(&hw->lock);
 
@@ -963,33 +963,33 @@ hns3vf_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
 	info->max_mtu = info->max_rx_pktlen - HNS3_ETH_OVERHEAD;
 	info->max_lro_pkt_size = HNS3_MAX_LRO_SIZE;
 
-	info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_TCP_CKSUM |
-				 DEV_RX_OFFLOAD_SCTP_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				 DEV_RX_OFFLOAD_SCATTER |
-				 DEV_RX_OFFLOAD_VLAN_STRIP |
-				 DEV_RX_OFFLOAD_VLAN_FILTER |
-				 DEV_RX_OFFLOAD_JUMBO_FRAME |
-				 DEV_RX_OFFLOAD_RSS_HASH |
-				 DEV_RX_OFFLOAD_TCP_LRO);
-	info->tx_offload_capa = (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_IPV4_CKSUM |
-				 DEV_TX_OFFLOAD_TCP_CKSUM |
-				 DEV_TX_OFFLOAD_UDP_CKSUM |
-				 DEV_TX_OFFLOAD_SCTP_CKSUM |
-				 DEV_TX_OFFLOAD_MULTI_SEGS |
-				 DEV_TX_OFFLOAD_TCP_TSO |
-				 DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				 DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				 DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-				 DEV_TX_OFFLOAD_MBUF_FAST_FREE |
+	info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_SCATTER |
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+				 RTE_ETH_RX_OFFLOAD_RSS_HASH |
+				 RTE_ETH_RX_OFFLOAD_TCP_LRO);
+	info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+				 RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				 RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				 RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
 				 hns3_txvlan_cap_get(hw));
 
 	if (hns3_dev_outer_udp_cksum_supported(hw))
-		info->tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+		info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 	if (hns3_dev_indep_txrx_supported(hw))
 		info->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -1669,10 +1669,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	tmp_mask = (unsigned int)mask;
 
-	if (tmp_mask & ETH_VLAN_FILTER_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_FILTER_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN filter */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = hns3vf_en_vlan_filter(hw, true);
 		else
 			ret = hns3vf_en_vlan_filter(hw, false);
@@ -1682,10 +1682,10 @@ hns3vf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	}
 
 	/* Vlan stripping setting */
-	if (tmp_mask & ETH_VLAN_STRIP_MASK) {
+	if (tmp_mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rte_spinlock_lock(&hw->lock);
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ret = hns3vf_en_hw_strip_rxvtag(hw, true);
 		else
 			ret = hns3vf_en_hw_strip_rxvtag(hw, false);
@@ -1753,7 +1753,7 @@ hns3vf_restore_vlan_conf(struct hns3_adapter *hns)
 	int ret;
 
 	dev_conf = &hw->data->dev_conf;
-	en = dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP ? true
+	en = dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP ? true
 								   : false;
 	ret = hns3vf_en_hw_strip_rxvtag(hw, en);
 	if (ret)
@@ -1778,8 +1778,8 @@ hns3vf_dev_configure_vlan(struct rte_eth_dev *dev)
 	}
 
 	/* Apply vlan offload setting */
-	ret = hns3vf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK);
+	ret = hns3vf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		hns3_err(hw, "dev config vlan offload failed, ret = %d.", ret);
 
@@ -2088,7 +2088,7 @@ hns3vf_do_stop(struct hns3_adapter *hns)
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	/*
 	 * The "hns3vf_do_stop" function will also be called by .stop_service to
@@ -2247,31 +2247,31 @@ hns3vf_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	memset(&new_link, 0, sizeof(new_link));
 	switch (mac->link_speed) {
-	case ETH_SPEED_NUM_10M:
-	case ETH_SPEED_NUM_100M:
-	case ETH_SPEED_NUM_1G:
-	case ETH_SPEED_NUM_10G:
-	case ETH_SPEED_NUM_25G:
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_50G:
-	case ETH_SPEED_NUM_100G:
-	case ETH_SPEED_NUM_200G:
+	case RTE_ETH_SPEED_NUM_10M:
+	case RTE_ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_50G:
+	case RTE_ETH_SPEED_NUM_100G:
+	case RTE_ETH_SPEED_NUM_200G:
 		if (mac->link_status)
 			new_link.link_speed = mac->link_speed;
 		break;
 	default:
 		if (mac->link_status)
-			new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	}
 
 	if (!mac->link_status)
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 
 	new_link.link_duplex = mac->link_duplex;
-	new_link.link_status = mac->link_status ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link.link_status = mac->link_status ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg =
-	    !(eth_dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+	    !(eth_dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(eth_dev, &new_link);
 }
@@ -2599,11 +2599,11 @@ hns3vf_stop_service(struct hns3_adapter *hns)
 		 * Make sure call update link status before hns3vf_stop_poll_job
 		 * because update link status depend on polling job exist.
 		 */
-		hns3vf_update_link_status(hw, ETH_LINK_DOWN, hw->mac.link_speed,
+		hns3vf_update_link_status(hw, RTE_ETH_LINK_DOWN, hw->mac.link_speed,
 					  hw->mac.link_duplex);
 		hns3vf_stop_poll_job(eth_dev);
 	}
-	hw->mac.link_status = ETH_LINK_DOWN;
+	hw->mac.link_status = RTE_ETH_LINK_DOWN;
 
 	hns3_set_rxtx_function(eth_dev);
 	rte_wmb();
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index fc77979c5f14..0ac8705b590b 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1298,10 +1298,10 @@ hns3_rss_input_tuple_supported(struct hns3_hw *hw,
 	 * Kunpeng930 and future kunpeng series support to use src/dst port
 	 * fields to RSS hash for IPv6 SCTP packet type.
 	 */
-	if (rss->types & (ETH_RSS_L4_DST_ONLY | ETH_RSS_L4_SRC_ONLY) &&
-	    (rss->types & ETH_RSS_IP ||
+	if (rss->types & (RTE_ETH_RSS_L4_DST_ONLY | RTE_ETH_RSS_L4_SRC_ONLY) &&
+	    (rss->types & RTE_ETH_RSS_IP ||
 	    (!hw->rss_info.ipv6_sctp_offload_supported &&
-	    rss->types & ETH_RSS_NONFRAG_IPV6_SCTP)))
+	    rss->types & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 		return false;
 
 	return true;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index df8485904688..395590c86c03 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -21,7 +21,7 @@ hns3_mbuf_dyn_rx_timestamp_register(struct rte_eth_dev *dev,
 	struct hns3_hw *hw = &hns->hw;
 	int ret;
 
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		return 0;
 
 	ret = rte_mbuf_dyn_rx_timestamp_register
diff --git a/drivers/net/hns3/hns3_rss.c b/drivers/net/hns3/hns3_rss.c
index 3a81e90e0911..85495bbe89d9 100644
--- a/drivers/net/hns3/hns3_rss.c
+++ b/drivers/net/hns3/hns3_rss.c
@@ -76,69 +76,69 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_tuple_table[] = {
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV4 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) },
-	{ ETH_RSS_FRAG_IPV6 | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP | ETH_RSS_L4_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP | RTE_ETH_RSS_L4_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_SRC_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_SRC_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER | ETH_RSS_L3_DST_ONLY,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER | RTE_ETH_RSS_L3_DST_ONLY,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) },
 };
 
@@ -146,44 +146,44 @@ static const struct {
 	uint64_t rss_types;
 	uint64_t rss_field;
 } hns3_set_rss_types[] = {
-	{ ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
+	{ RTE_ETH_RSS_FRAG_IPV4, BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_FRAG_IP_S) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV4_EN_NONFRAG_IP_D) },
-	{ ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
+	{ RTE_ETH_RSS_FRAG_IPV6, BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_FRAG_IP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_TCP_EN_TCP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_UDP_EN_UDP_D) },
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_IP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_D) |
 	  BIT_ULL(HNS3_RSS_FILED_IPV6_SCTP_EN_SCTP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_SCTP_EN_SCTP_VER) },
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_S) |
 	  BIT_ULL(HNS3_RSS_FIELD_IPV6_NONFRAG_IP_D) }
 };
@@ -365,10 +365,10 @@ hns3_set_rss_tuple_by_rss_hf(struct hns3_hw *hw,
 	 * When user does not specify the following types or a combination of
 	 * the following types, it enables all fields for the supported RSS
 	 * types. the following types as:
-	 * - ETH_RSS_L3_SRC_ONLY
-	 * - ETH_RSS_L3_DST_ONLY
-	 * - ETH_RSS_L4_SRC_ONLY
-	 * - ETH_RSS_L4_DST_ONLY
+	 * - RTE_ETH_RSS_L3_SRC_ONLY
+	 * - RTE_ETH_RSS_L3_DST_ONLY
+	 * - RTE_ETH_RSS_L4_SRC_ONLY
+	 * - RTE_ETH_RSS_L4_DST_ONLY
 	 */
 	if (fields_count == 0) {
 		for (i = 0; i < RTE_DIM(hns3_set_rss_types); i++) {
@@ -520,8 +520,8 @@ hns3_dev_rss_reta_update(struct rte_eth_dev *dev,
 	memcpy(indirection_tbl, rss_cfg->rss_indirection_tbl,
 	       sizeof(rss_cfg->rss_indirection_tbl));
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].reta[shift] >= hw->alloc_rss_size) {
 			rte_spinlock_unlock(&hw->lock);
 			hns3_err(hw, "queue id(%u) set to redirection table "
@@ -572,8 +572,8 @@ hns3_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 	rte_spinlock_lock(&hw->lock);
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] =
 						rss_cfg->rss_indirection_tbl[i];
@@ -692,7 +692,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	}
 
 	/* When RSS is off, redirect the packet queue 0 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG) == 0)
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0)
 		hns3_rss_uninit(hns);
 
 	/* Configure RSS hash algorithm and hash key offset */
@@ -709,7 +709,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	 * When RSS is off, it doesn't need to configure rss redirection table
 	 * to hardware.
 	 */
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret = hns3_set_rss_indir_table(hw, rss_cfg->rss_indirection_tbl,
 					       hw->rss_ind_tbl_size);
 		if (ret)
@@ -723,7 +723,7 @@ hns3_config_rss(struct hns3_adapter *hns)
 	return ret;
 
 rss_indir_table_uninit:
-	if (((uint32_t)mq_mode & ETH_MQ_RX_RSS_FLAG)) {
+	if (((uint32_t)mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)) {
 		ret1 = hns3_rss_reset_indir_table(hw);
 		if (ret1 != 0)
 			return ret;
diff --git a/drivers/net/hns3/hns3_rss.h b/drivers/net/hns3/hns3_rss.h
index 996083b88b25..6f153a1b7bfb 100644
--- a/drivers/net/hns3/hns3_rss.h
+++ b/drivers/net/hns3/hns3_rss.h
@@ -8,20 +8,20 @@
 #include <rte_flow.h>
 
 #define HNS3_ETH_RSS_SUPPORT ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L3_SRC_ONLY | \
-	ETH_RSS_L3_DST_ONLY | \
-	ETH_RSS_L4_SRC_ONLY | \
-	ETH_RSS_L4_DST_ONLY)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L3_SRC_ONLY | \
+	RTE_ETH_RSS_L3_DST_ONLY | \
+	RTE_ETH_RSS_L4_SRC_ONLY | \
+	RTE_ETH_RSS_L4_DST_ONLY)
 
 #define HNS3_RSS_IND_TBL_SIZE	512 /* The size of hash lookup table */
 #define HNS3_RSS_IND_TBL_SIZE_MAX 2048
diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 0f222b37f9d1..01e43791572b 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -1912,7 +1912,7 @@ hns3_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t nb_desc,
 	memset(&rxq->dfx_stats, 0, sizeof(struct hns3_rx_dfx_stats));
 
 	/* CRC len set here is used for amending packet length */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -1957,7 +1957,7 @@ hns3_rx_scattered_calc(struct rte_eth_dev *dev)
 						 rxq->rx_buf_len);
 	}
 
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SCATTER ||
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 	    dev_conf->rxmode.max_rx_pkt_len > hw->rx_buf_len)
 		dev->data->scattered_rx = true;
 }
@@ -2833,7 +2833,7 @@ hns3_get_rx_function(struct rte_eth_dev *dev)
 	vec_allowed = vec_support && hns3_get_default_vec_support();
 	sve_allowed = vec_support && hns3_get_sve_support();
 	simple_allowed = !dev->data->scattered_rx &&
-			 (offloads & DEV_RX_OFFLOAD_TCP_LRO) == 0;
+			 (offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) == 0;
 
 	if (hns->rx_func_hint == HNS3_IO_FUNC_HINT_VEC && vec_allowed)
 		return hns3_recv_pkts_vec;
@@ -3127,7 +3127,7 @@ hns3_restore_gro_conf(struct hns3_hw *hw)
 	int ret;
 
 	offloads = hw->data->dev_conf.rxmode.offloads;
-	gro_en = offloads & DEV_RX_OFFLOAD_TCP_LRO ? true : false;
+	gro_en = offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO ? true : false;
 	ret = hns3_config_gro(hw, gro_en);
 	if (ret)
 		hns3_err(hw, "restore hardware GRO to %s failed, ret = %d",
@@ -4279,7 +4279,7 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
 	if (hns3_dev_ptp_supported(hw))
 		return false;
 
-	return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
+	return (offloads == (offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE));
 }
 
 static bool
@@ -4291,16 +4291,16 @@ hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
 	return true;
 #else
 #define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
-		DEV_TX_OFFLOAD_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_CKSUM | \
-		DEV_TX_OFFLOAD_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_SCTP_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
-		DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO | \
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)
 
 	uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
 	if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
diff --git a/drivers/net/hns3/hns3_rxtx.h b/drivers/net/hns3/hns3_rxtx.h
index cd7c21c1d0c8..2fa3a01dd3bf 100644
--- a/drivers/net/hns3/hns3_rxtx.h
+++ b/drivers/net/hns3/hns3_rxtx.h
@@ -307,7 +307,7 @@ struct hns3_rx_queue {
 	uint16_t rx_rearm_start; /* index of BD that driver re-arming from */
 	uint16_t rx_rearm_nb;    /* number of remaining BDs to be re-armed */
 
-	/* 4 if DEV_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
+	/* 4 if RTE_ETH_RX_OFFLOAD_KEEP_CRC offload set, 0 otherwise */
 	uint8_t crc_len;
 
 	/*
diff --git a/drivers/net/hns3/hns3_rxtx_vec.c b/drivers/net/hns3/hns3_rxtx_vec.c
index 844512f6ceec..d01a8d62bfb1 100644
--- a/drivers/net/hns3/hns3_rxtx_vec.c
+++ b/drivers/net/hns3/hns3_rxtx_vec.c
@@ -22,8 +22,8 @@ hns3_tx_check_vec_support(struct rte_eth_dev *dev)
 	if (hns3_dev_ptp_supported(hw))
 		return -ENOTSUP;
 
-	/* Only support DEV_TX_OFFLOAD_MBUF_FAST_FREE */
-	if (txmode->offloads != DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	/* Only support RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE */
+	if (txmode->offloads != RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		return -ENOTSUP;
 
 	return 0;
@@ -228,10 +228,10 @@ hns3_rxq_vec_check(struct hns3_rx_queue *rxq, void *arg)
 int
 hns3_rx_check_vec_support(struct rte_eth_dev *dev)
 {
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	uint64_t offloads_mask = DEV_RX_OFFLOAD_TCP_LRO |
-				 DEV_RX_OFFLOAD_VLAN;
+	uint64_t offloads_mask = RTE_ETH_RX_OFFLOAD_TCP_LRO |
+				 RTE_ETH_RX_OFFLOAD_VLAN;
 
 	struct hns3_hw *hw = HNS3_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if (hns3_dev_ptp_supported(hw))
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index 7b230e2ed17a..c199a87c6df4 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -1641,7 +1641,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
 
 	/* Set the global registers with default ether type value */
 	if (!pf->support_multi_driver) {
-		ret = i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+		ret = i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					 RTE_ETHER_TYPE_VLAN);
 		if (ret != I40E_SUCCESS) {
 			PMD_INIT_LOG(ERR,
@@ -1909,8 +1909,8 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	ad->tx_simple_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Only legacy filter API needs the following fdir config. So when the
 	 * legacy filter API is deprecated, the following codes should also be
@@ -1944,13 +1944,13 @@ i40e_dev_configure(struct rte_eth_dev *dev)
 	 *  number, which will be available after rx_queue_setup(). dev_start()
 	 *  function is good to place RSS setup.
 	 */
-	if (mq_mode & ETH_MQ_RX_VMDQ_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG) {
 		ret = i40e_vmdq_setup(dev);
 		if (ret)
 			goto err;
 	}
 
-	if (mq_mode & ETH_MQ_RX_DCB_FLAG) {
+	if (mq_mode & RTE_ETH_MQ_RX_DCB_FLAG) {
 		ret = i40e_dcb_setup(dev);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "failed to configure DCB.");
@@ -2227,17 +2227,17 @@ i40e_parse_link_speeds(uint16_t link_speeds)
 {
 	uint8_t link_speed = I40E_LINK_SPEED_UNKNOWN;
 
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed |= I40E_LINK_SPEED_40GB;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed |= I40E_LINK_SPEED_25GB;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed |= I40E_LINK_SPEED_20GB;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed |= I40E_LINK_SPEED_10GB;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed |= I40E_LINK_SPEED_1GB;
-	if (link_speeds & ETH_LINK_SPEED_100M)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100M)
 		link_speed |= I40E_LINK_SPEED_100MB;
 
 	return link_speed;
@@ -2345,13 +2345,13 @@ i40e_apply_link_speed(struct rte_eth_dev *dev)
 	abilities |= I40E_AQ_PHY_ENABLE_ATOMIC_LINK |
 		     I40E_AQ_PHY_LINK_ENABLED;
 
-	if (conf->link_speeds == ETH_LINK_SPEED_AUTONEG) {
-		conf->link_speeds = ETH_LINK_SPEED_40G |
-				    ETH_LINK_SPEED_25G |
-				    ETH_LINK_SPEED_20G |
-				    ETH_LINK_SPEED_10G |
-				    ETH_LINK_SPEED_1G |
-				    ETH_LINK_SPEED_100M;
+	if (conf->link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
+		conf->link_speeds = RTE_ETH_LINK_SPEED_40G |
+				    RTE_ETH_LINK_SPEED_25G |
+				    RTE_ETH_LINK_SPEED_20G |
+				    RTE_ETH_LINK_SPEED_10G |
+				    RTE_ETH_LINK_SPEED_1G |
+				    RTE_ETH_LINK_SPEED_100M;
 
 		abilities |= I40E_AQ_PHY_AN_ENABLED;
 	} else {
@@ -2910,34 +2910,34 @@ update_link_reg(struct i40e_hw *hw, struct rte_eth_link *link)
 	/* Parse the link status */
 	switch (link_speed) {
 	case I40E_REG_SPEED_0:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_REG_SPEED_1:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_REG_SPEED_2:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_2_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		else
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_REG_SPEED_3:
 		if (hw->mac.type == I40E_MAC_X722) {
-			link->link_speed = ETH_SPEED_NUM_5G;
+			link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		} else {
 			reg_val = I40E_READ_REG(hw, I40E_PRTMAC_MACC);
 
 			if (reg_val & I40E_REG_MACC_25GB)
-				link->link_speed = ETH_SPEED_NUM_25G;
+				link->link_speed = RTE_ETH_SPEED_NUM_25G;
 			else
-				link->link_speed = ETH_SPEED_NUM_40G;
+				link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		}
 		break;
 	case I40E_REG_SPEED_4:
 		if (hw->mac.type == I40E_MAC_X722)
-			link->link_speed = ETH_SPEED_NUM_10G;
+			link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		else
-			link->link_speed = ETH_SPEED_NUM_20G;
+			link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "Unknown link speed info %u", link_speed);
@@ -2964,8 +2964,8 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 		status = i40e_aq_get_link_info(hw, enable_lse,
 						&link_status, NULL);
 		if (unlikely(status != I40E_SUCCESS)) {
-			link->link_speed = ETH_SPEED_NUM_NONE;
-			link->link_duplex = ETH_LINK_FULL_DUPLEX;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
+			link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			return;
 		}
@@ -2980,28 +2980,28 @@ update_link_aq(struct i40e_hw *hw, struct rte_eth_link *link,
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case I40E_LINK_SPEED_100MB:
-		link->link_speed = ETH_SPEED_NUM_100M;
+		link->link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_LINK_SPEED_1GB:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_LINK_SPEED_20GB:
-		link->link_speed = ETH_SPEED_NUM_20G;
+		link->link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case I40E_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case I40E_LINK_SPEED_40GB:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	default:
 		if (link->link_status)
-			link->link_speed = ETH_SPEED_NUM_UNKNOWN;
+			link->link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		else
-			link->link_speed = ETH_SPEED_NUM_NONE;
+			link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 }
@@ -3018,9 +3018,9 @@ i40e_dev_link_update(struct rte_eth_dev *dev,
 	memset(&link, 0, sizeof(link));
 
 	/* i40e uses full duplex only */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	if (!wait_to_complete && !enable_lse)
 		update_link_reg(hw, &link);
@@ -3748,34 +3748,34 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_RSS_HASH;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 	dev_info->dev_capa =
 		RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
@@ -3834,7 +3834,7 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	if (I40E_PHY_TYPE_SUPPORT_40G(hw->phy.phy_types)) {
 		/* For XL710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_40G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_40G;
 		dev_info->default_rxportconf.nb_queues = 2;
 		dev_info->default_txportconf.nb_queues = 2;
 		if (dev->data->nb_rx_queues == 1)
@@ -3848,17 +3848,17 @@ i40e_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	} else if (I40E_PHY_TYPE_SUPPORT_25G(hw->phy.phy_types)) {
 		/* For XXV710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_25G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
 		dev_info->default_rxportconf.ring_size = 256;
 		dev_info->default_txportconf.ring_size = 256;
 	} else {
 		/* For X710 */
-		dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 		dev_info->default_rxportconf.nb_queues = 1;
 		dev_info->default_txportconf.nb_queues = 1;
-		if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_10G) {
+		if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_10G) {
 			dev_info->default_rxportconf.ring_size = 512;
 			dev_info->default_txportconf.ring_size = 256;
 		} else {
@@ -3897,7 +3897,7 @@ i40e_vlan_tpid_set_by_registers(struct rte_eth_dev *dev,
 	int ret;
 
 	if (qinq) {
-		if (vlan_type == ETH_VLAN_TYPE_OUTER)
+		if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 			reg_id = 2;
 	}
 
@@ -3944,12 +3944,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	struct i40e_pf *pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		   DEV_RX_OFFLOAD_VLAN_EXTEND;
+		   RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	int ret = 0;
 
-	if ((vlan_type != ETH_VLAN_TYPE_INNER &&
-	     vlan_type != ETH_VLAN_TYPE_OUTER) ||
-	    (!qinq && vlan_type == ETH_VLAN_TYPE_INNER)) {
+	if ((vlan_type != RTE_ETH_VLAN_TYPE_INNER &&
+	     vlan_type != RTE_ETH_VLAN_TYPE_OUTER) ||
+	    (!qinq && vlan_type == RTE_ETH_VLAN_TYPE_INNER)) {
 		PMD_DRV_LOG(ERR,
 			    "Unsupported vlan type.");
 		return -EINVAL;
@@ -3963,12 +3963,12 @@ i40e_vlan_tpid_set(struct rte_eth_dev *dev,
 	/* 802.1ad frames ability is added in NVM API 1.7*/
 	if (hw->flags & I40E_HW_FLAG_802_1AD_CAPABLE) {
 		if (qinq) {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->first_tag = rte_cpu_to_le_16(tpid);
-			else if (vlan_type == ETH_VLAN_TYPE_INNER)
+			else if (vlan_type == RTE_ETH_VLAN_TYPE_INNER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		} else {
-			if (vlan_type == ETH_VLAN_TYPE_OUTER)
+			if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER)
 				hw->second_tag = rte_cpu_to_le_16(tpid);
 		}
 		ret = i40e_aq_set_switch_config(hw, 0, 0, 0, NULL);
@@ -4027,37 +4027,37 @@ i40e_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_vlan_stripping(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND) {
 			i40e_vsi_config_double_vlan(vsi, TRUE);
 			/* Set global registers with default ethertype. */
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_OUTER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					   RTE_ETHER_TYPE_VLAN);
-			i40e_vlan_tpid_set(dev, ETH_VLAN_TYPE_INNER,
+			i40e_vlan_tpid_set(dev, RTE_ETH_VLAN_TYPE_INNER,
 					   RTE_ETHER_TYPE_VLAN);
 		}
 		else
 			i40e_vsi_config_double_vlan(vsi, FALSE);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
 		/* Enable or disable outer VLAN stripping */
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			i40e_vsi_config_outer_vlan_stripping(vsi, TRUE);
 		else
 			i40e_vsi_config_outer_vlan_stripping(vsi, FALSE);
@@ -4140,17 +4140,17 @@ i40e_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	 /* Return current mode according to actual setting*/
 	switch (hw->fc.current_mode) {
 	case I40E_FC_FULL:
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	case I40E_FC_TX_PAUSE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case I40E_FC_RX_PAUSE:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case I40E_FC_NONE:
 	default:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	};
 
 	return 0;
@@ -4166,10 +4166,10 @@ i40e_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	struct i40e_hw *hw;
 	struct i40e_pf *pf;
 	enum i40e_fc_mode rte_fcmode_2_i40e_fcmode[] = {
-		[RTE_FC_NONE] = I40E_FC_NONE,
-		[RTE_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
-		[RTE_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
-		[RTE_FC_FULL] = I40E_FC_FULL
+		[RTE_ETH_FC_NONE] = I40E_FC_NONE,
+		[RTE_ETH_FC_RX_PAUSE] = I40E_FC_RX_PAUSE,
+		[RTE_ETH_FC_TX_PAUSE] = I40E_FC_TX_PAUSE,
+		[RTE_ETH_FC_FULL] = I40E_FC_FULL
 	};
 
 	/* high_water field in the rte_eth_fc_conf using the kilobytes unit */
@@ -4316,7 +4316,7 @@ i40e_macaddr_add(struct rte_eth_dev *dev,
 	}
 
 	rte_memcpy(&mac_filter.mac_addr, mac_addr, RTE_ETHER_ADDR_LEN);
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		mac_filter.filter_type = I40E_MACVLAN_PERFECT_MATCH;
 	else
 		mac_filter.filter_type = I40E_MAC_PERFECT_MATCH;
@@ -4469,7 +4469,7 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4485,8 +4485,8 @@ i40e_dev_rss_reta_update(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -4512,7 +4512,7 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
 	int ret;
 
 	if (reta_size != lut_size ||
-		reta_size > ETH_RSS_RETA_SIZE_512) {
+		reta_size > RTE_ETH_RSS_RETA_SIZE_512) {
 		PMD_DRV_LOG(ERR,
 			"The size of hash lookup table configured (%d) doesn't match the number hardware can supported (%d)",
 			reta_size, lut_size);
@@ -4529,8 +4529,8 @@ i40e_dev_rss_reta_query(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = lut[i];
 	}
@@ -4847,7 +4847,7 @@ i40e_pf_parameter_init(struct rte_eth_dev *dev)
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
 				hw->func_caps.num_vsis - vsi_count);
 			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
-				ETH_64_POOLS);
+				RTE_ETH_64_POOLS);
 			if (pf->max_nb_vmdq_vsi) {
 				pf->flags |= I40E_FLAG_VMDQ;
 				pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
@@ -6132,10 +6132,10 @@ i40e_dev_init_vlan(struct rte_eth_dev *dev)
 	int mask = 0;
 
 	/* Apply vlan offload setting */
-	mask = ETH_VLAN_STRIP_MASK |
-	       ETH_QINQ_STRIP_MASK |
-	       ETH_VLAN_FILTER_MASK |
-	       ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK |
+	       RTE_ETH_QINQ_STRIP_MASK |
+	       RTE_ETH_VLAN_FILTER_MASK |
+	       RTE_ETH_VLAN_EXTEND_MASK;
 	ret = i40e_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_DRV_LOG(INFO, "Failed to update vlan offload");
@@ -6262,9 +6262,9 @@ i40e_pf_setup(struct i40e_pf *pf)
 
 	/* Configure filter control */
 	memset(&settings, 0, sizeof(settings));
-	if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_128)
+	if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_128)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_128;
-	else if (hw->func_caps.rss_table_size == ETH_RSS_RETA_SIZE_512)
+	else if (hw->func_caps.rss_table_size == RTE_ETH_RSS_RETA_SIZE_512)
 		settings.hash_lut_size = I40E_HASH_LUT_SIZE_512;
 	else {
 		PMD_DRV_LOG(ERR, "Hash lookup table size (%u) not supported",
@@ -7117,7 +7117,7 @@ i40e_find_vlan_filter(struct i40e_vsi *vsi,
 {
 	uint32_t vid_idx, vid_bit;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return 0;
 
 	vid_idx = I40E_VFTA_IDX(vlan_id);
@@ -7152,7 +7152,7 @@ i40e_set_vlan_filter(struct i40e_vsi *vsi,
 	struct i40e_aqc_add_remove_vlan_element_data vlan_data = {0};
 	int ret;
 
-	if (vlan_id > ETH_VLAN_ID_MAX)
+	if (vlan_id > RTE_ETH_VLAN_ID_MAX)
 		return;
 
 	i40e_store_vlan_filter(vsi, vlan_id, on);
@@ -8730,16 +8730,16 @@ i40e_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_add_vxlan_port(pf, udp_tunnel->udp_port,
 					  I40E_AQC_TUNNEL_TYPE_VXLAN_GPE);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8765,12 +8765,12 @@ i40e_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		ret = i40e_del_vxlan_port(pf, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -1;
 		break;
@@ -8862,7 +8862,7 @@ int
 i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 {
 	struct i40e_hw *hw = &pf->adapter->hw;
-	uint8_t lut[ETH_RSS_RETA_SIZE_512];
+	uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 	uint32_t i;
 	int num;
 
@@ -8870,7 +8870,7 @@ i40e_pf_reset_rss_reta(struct i40e_pf *pf)
 	 * configured. It's necessary to calculate the actual PF
 	 * queues that are configured.
 	 */
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		num = i40e_pf_calc_configured_queues_num(pf);
 	else
 		num = pf->dev_data->nb_rx_queues;
@@ -8949,7 +8949,7 @@ i40e_pf_config_rss(struct i40e_pf *pf)
 	rss_hf = pf->dev_data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
 	mq_mode = pf->dev_data->dev_conf.rxmode.mq_mode;
 	if (!(rss_hf & pf->adapter->flow_types_mask) ||
-	    !(mq_mode & ETH_MQ_RX_RSS_FLAG))
+	    !(mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		return 0;
 
 	hw = I40E_PF_TO_HW(pf);
@@ -10412,8 +10412,8 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
 		return I40E_ERR_NO_MEMORY;
 	}
 	switch (mirror_conf->rule_type) {
-	case ETH_MIRROR_VLAN:
-		for (i = 0, j = 0; i < ETH_MIRROR_MAX_VLANS; i++) {
+	case RTE_ETH_MIRROR_VLAN:
+		for (i = 0, j = 0; i < RTE_ETH_MIRROR_MAX_VLANS; i++) {
 			if (mirror_conf->vlan.vlan_mask & (1ULL << i)) {
 				mirr_rule->entries[j] =
 					mirror_conf->vlan.vlan_id[i];
@@ -10427,8 +10427,8 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
 		}
 		mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_VLAN;
 		break;
-	case ETH_MIRROR_VIRTUAL_POOL_UP:
-	case ETH_MIRROR_VIRTUAL_POOL_DOWN:
+	case RTE_ETH_MIRROR_VIRTUAL_POOL_UP:
+	case RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN:
 		/* check if the specified pool bit is out of range */
 		if (mirror_conf->pool_mask > (uint64_t)(1ULL << (pf->vf_num + 1))) {
 			PMD_DRV_LOG(ERR, "pool mask is out of range.");
@@ -10453,15 +10453,15 @@ i40e_mirror_rule_set(struct rte_eth_dev *dev,
 		}
 		/* egress and ingress in aq commands means from switch but not port */
 		mirr_rule->rule_type =
-			(mirror_conf->rule_type == ETH_MIRROR_VIRTUAL_POOL_UP) ?
+			(mirror_conf->rule_type == RTE_ETH_MIRROR_VIRTUAL_POOL_UP) ?
 			I40E_AQC_MIRROR_RULE_TYPE_VPORT_EGRESS :
 			I40E_AQC_MIRROR_RULE_TYPE_VPORT_INGRESS;
 		break;
-	case ETH_MIRROR_UPLINK_PORT:
+	case RTE_ETH_MIRROR_UPLINK_PORT:
 		/* egress and ingress in aq commands means from switch but not port*/
 		mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_ALL_EGRESS;
 		break;
-	case ETH_MIRROR_DOWNLINK_PORT:
+	case RTE_ETH_MIRROR_DOWNLINK_PORT:
 		mirr_rule->rule_type = I40E_AQC_MIRROR_RULE_TYPE_ALL_INGRESS;
 		break;
 	default:
@@ -10603,16 +10603,16 @@ i40e_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_40G:
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_25G:
 		tsync_inc_l = I40E_PTP_40GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_40GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		tsync_inc_l = I40E_PTP_10GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_10GB_INCVAL >> 32;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		tsync_inc_l = I40E_PTP_1GB_INCVAL & 0xFFFFFFFF;
 		tsync_inc_h = I40E_PTP_1GB_INCVAL >> 32;
 		break;
@@ -10840,7 +10840,7 @@ i40e_parse_dcb_configure(struct rte_eth_dev *dev,
 	else
 		*tc_map = RTE_LEN2MASK(dcb_rx_conf->nb_tcs, uint8_t);
 
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		dcb_cfg->pfc.willing = 0;
 		dcb_cfg->pfc.pfccap = I40E_MAX_TRAFFIC_CLASS;
 		dcb_cfg->pfc.pfcenable = *tc_map;
@@ -11348,7 +11348,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint16_t bsf, tc_mapping;
 	int i, j = 0;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = rte_bsf32(vsi->enabled_tc + 1);
 	else
 		dcb_info->nb_tcs = 1;
@@ -11396,7 +11396,7 @@ i40e_dev_get_dcb_info(struct rte_eth_dev *dev,
 				dcb_info->tc_queue.tc_rxq[j][i].nb_queue;
 		}
 		j++;
-	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, ETH_MAX_VMDQ_POOL));
+	} while (j < RTE_MIN(pf->nb_cfg_vmdq_vsi, RTE_ETH_MAX_VMDQ_POOL));
 	return 0;
 }
 
@@ -11774,10 +11774,10 @@ i40e_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > I40E_ETH_MAX_LEN)
 		dev_data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev_data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h
index cd6deabd60b3..f21c2de6bdb9 100644
--- a/drivers/net/i40e/i40e_ethdev.h
+++ b/drivers/net/i40e/i40e_ethdev.h
@@ -139,17 +139,17 @@ enum i40e_flxpld_layer_idx {
 		       I40E_FLAG_RSS_AQ_CAPABLE)
 
 #define I40E_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /* All bits of RSS hash enable for X722*/
 #define I40E_RSS_HENA_ALL_X722 ( \
@@ -1076,7 +1076,7 @@ struct i40e_rte_flow_rss_conf {
 	uint8_t key[(I40E_VFQF_HKEY_MAX_INDEX > I40E_PFQF_HKEY_MAX_INDEX ?
 		     I40E_VFQF_HKEY_MAX_INDEX : I40E_PFQF_HKEY_MAX_INDEX + 1) *
 		    sizeof(uint32_t)];		/**< Hash key. */
-	uint16_t queue[ETH_RSS_RETA_SIZE_512];	/**< Queues indices to use. */
+	uint16_t queue[RTE_ETH_RSS_RETA_SIZE_512];	/**< Queues indices to use. */
 
 	bool symmetric_enable;		/**< true, if enable symmetric */
 	uint64_t config_pctypes;	/**< All PCTYPES with the flow  */
diff --git a/drivers/net/i40e/i40e_ethdev_vf.c b/drivers/net/i40e/i40e_ethdev_vf.c
index 0cfe13b7b227..cda426fe5614 100644
--- a/drivers/net/i40e/i40e_ethdev_vf.c
+++ b/drivers/net/i40e/i40e_ethdev_vf.c
@@ -1077,7 +1077,7 @@ i40evf_add_vlan(struct rte_eth_dev *dev, uint16_t vlanid)
 	 * VLAN_STRIP by default. So reconfigure the vlan_offload
 	 * as it was done by the app earlier.
 	 */
-	err = i40evf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK);
+	err = i40evf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK);
 	if (err)
 		PMD_DRV_LOG(ERR, "fail to set vlan_strip");
 
@@ -1403,28 +1403,28 @@ i40evf_handle_pf_event(struct rte_eth_dev *dev, uint8_t *msg,
 				pf_msg->event_data.link_event_adv.link_status;
 
 			switch (pf_msg->event_data.link_event_adv.link_speed) {
-			case ETH_SPEED_NUM_100M:
+			case RTE_ETH_SPEED_NUM_100M:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_100MB;
 				break;
-			case ETH_SPEED_NUM_1G:
+			case RTE_ETH_SPEED_NUM_1G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_1GB;
 				break;
-			case ETH_SPEED_NUM_2_5G:
+			case RTE_ETH_SPEED_NUM_2_5G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_2_5GB;
 				break;
-			case ETH_SPEED_NUM_5G:
+			case RTE_ETH_SPEED_NUM_5G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_5GB;
 				break;
-			case ETH_SPEED_NUM_10G:
+			case RTE_ETH_SPEED_NUM_10G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_10GB;
 				break;
-			case ETH_SPEED_NUM_20G:
+			case RTE_ETH_SPEED_NUM_20G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_20GB;
 				break;
-			case ETH_SPEED_NUM_25G:
+			case RTE_ETH_SPEED_NUM_25G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_25GB;
 				break;
-			case ETH_SPEED_NUM_40G:
+			case RTE_ETH_SPEED_NUM_40G:
 				vf->link_speed = VIRTCHNL_LINK_SPEED_40GB;
 				break;
 			default:
@@ -1770,7 +1770,7 @@ static int
 i40evf_init_vlan(struct rte_eth_dev *dev)
 {
 	/* Apply vlan offload setting */
-	i40evf_vlan_offload_set(dev, ETH_VLAN_STRIP_MASK);
+	i40evf_vlan_offload_set(dev, RTE_ETH_VLAN_STRIP_MASK);
 
 	return 0;
 }
@@ -1785,9 +1785,9 @@ i40evf_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			i40evf_enable_vlan_strip(dev);
 		else
 			i40evf_disable_vlan_strip(dev);
@@ -1933,7 +1933,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
 	/**
 	 * Check if the jumbo frame and maximum packet length are set correctly
 	 */
-	if (dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
 		    rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -1954,7 +1954,7 @@ i40evf_rxq_init(struct rte_eth_dev *dev, struct i40e_rx_queue *rxq)
 		}
 	}
 
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    rxq->max_pkt_len > buf_size)
 		dev_data->scattered_rx = 1;
 
@@ -2290,35 +2290,35 @@ i40evf_dev_link_update(struct rte_eth_dev *dev,
 	/* Linux driver PF host */
 	switch (vf->link_speed) {
 	case I40E_LINK_SPEED_100MB:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case I40E_LINK_SPEED_1GB:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case I40E_LINK_SPEED_10GB:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case I40E_LINK_SPEED_20GB:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case I40E_LINK_SPEED_25GB:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case I40E_LINK_SPEED_40GB:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	default:
 		if (vf->link_up)
-			new_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+			new_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		else
-			new_link.link_speed = ETH_SPEED_NUM_NONE;
+			new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 	/* full duplex only */
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = vf->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg =
-		!(dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED);
+		!(dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -2367,36 +2367,36 @@ i40evf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = dev_info->max_rx_pktlen - I40E_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
 	dev_info->flow_type_rss_offloads = vf->adapter->flow_types_mask;
 	dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	dev_info->tx_queue_offload_capa = 0;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -2596,10 +2596,10 @@ i40evf_dev_rss_reta_update(struct rte_eth_dev *dev,
 	uint16_t i, idx, shift;
 	int ret;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_64) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_64) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number of hardware can "
-			"support (%d)", reta_size, ETH_RSS_RETA_SIZE_64);
+			"support (%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_64);
 		return -EINVAL;
 	}
 
@@ -2612,8 +2612,8 @@ i40evf_dev_rss_reta_update(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -2635,10 +2635,10 @@ i40evf_dev_rss_reta_query(struct rte_eth_dev *dev,
 	uint8_t *lut;
 	int ret;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_64) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_64) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number of hardware can "
-			"support (%d)", reta_size, ETH_RSS_RETA_SIZE_64);
+			"support (%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_64);
 		return -EINVAL;
 	}
 
@@ -2652,8 +2652,8 @@ i40evf_dev_rss_reta_query(struct rte_eth_dev *dev,
 	if (ret)
 		goto out;
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = lut[i];
 	}
@@ -2770,7 +2770,7 @@ i40evf_config_rss(struct i40e_vf *vf)
 	uint8_t *lut_info;
 	int ret;
 
-	if (vf->dev_data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (vf->dev_data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		i40evf_disable_rss(vf);
 		PMD_DRV_LOG(DEBUG, "RSS not configured");
 		return 0;
@@ -2887,10 +2887,10 @@ i40evf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > I40E_ETH_MAX_LEN)
 		dev_data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev_data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
 	return ret;
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 3c1570bd9c47..d1cb992be61d 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -2015,7 +2015,7 @@ i40e_get_outer_vlan(struct rte_eth_dev *dev)
 {
 	struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	int qinq = dev->data->dev_conf.rxmode.offloads &
-		DEV_RX_OFFLOAD_VLAN_EXTEND;
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 	uint64_t reg_r = 0;
 	uint16_t reg_id;
 	uint16_t tpid;
diff --git a/drivers/net/i40e/i40e_hash.c b/drivers/net/i40e/i40e_hash.c
index 1fb8c9abfcc6..3755d4d3fe2a 100644
--- a/drivers/net/i40e/i40e_hash.c
+++ b/drivers/net/i40e/i40e_hash.c
@@ -102,47 +102,47 @@ struct i40e_hash_map_rss_inset {
 
 const struct i40e_hash_map_rss_inset i40e_hash_rss_inset[] = {
 	/* IPv4 */
-	{ ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
-	{ ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
+	{ RTE_ETH_RSS_FRAG_IPV4, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST },
 
-	{ ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV4_SCTP, I40E_INSET_IPV4_SRC | I40E_INSET_IPV4_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* IPv6 */
-	{ ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
-	{ ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
+	{ RTE_ETH_RSS_FRAG_IPV6, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_OTHER,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	  I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST },
 
-	{ ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
-	{ ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
+	{ RTE_ETH_RSS_NONFRAG_IPV6_SCTP, I40E_INSET_IPV6_SRC | I40E_INSET_IPV6_DST |
 	  I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT | I40E_INSET_SCTP_VT },
 
 	/* Port */
-	{ ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
+	{ RTE_ETH_RSS_PORT, I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT },
 
 	/* Ether */
-	{ ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
-	{ ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
+	{ RTE_ETH_RSS_L2_PAYLOAD, I40E_INSET_LAST_ETHER_TYPE },
+	{ RTE_ETH_RSS_ETH, I40E_INSET_DMAC | I40E_INSET_SMAC },
 
 	/* VLAN */
-	{ ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
-	{ ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
+	{ RTE_ETH_RSS_S_VLAN, I40E_INSET_VLAN_OUTER },
+	{ RTE_ETH_RSS_C_VLAN, I40E_INSET_VLAN_INNER },
 };
 
 #define I40E_HASH_VOID_NEXT_ALLOW	BIT_ULL(RTE_FLOW_ITEM_TYPE_ETH)
@@ -201,30 +201,30 @@ struct i40e_hash_match_pattern {
 #define I40E_HASH_MAP_CUS_PATTERN(pattern, rss_mask, cus_pctype) { \
 	pattern, rss_mask, true, cus_pctype }
 
-#define I40E_HASH_L2_RSS_MASK		(ETH_RSS_VLAN | ETH_RSS_ETH | \
-					ETH_RSS_L2_SRC_ONLY | \
-					ETH_RSS_L2_DST_ONLY)
+#define I40E_HASH_L2_RSS_MASK		(RTE_ETH_RSS_VLAN | RTE_ETH_RSS_ETH | \
+					RTE_ETH_RSS_L2_SRC_ONLY | \
+					RTE_ETH_RSS_L2_DST_ONLY)
 
 #define I40E_HASH_L23_RSS_MASK		(I40E_HASH_L2_RSS_MASK | \
-					ETH_RSS_L3_SRC_ONLY | \
-					ETH_RSS_L3_DST_ONLY)
+					RTE_ETH_RSS_L3_SRC_ONLY | \
+					RTE_ETH_RSS_L3_DST_ONLY)
 
-#define I40E_HASH_IPV4_L23_RSS_MASK	(ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
-#define I40E_HASH_IPV6_L23_RSS_MASK	(ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV4_L23_RSS_MASK	(RTE_ETH_RSS_IPV4 | I40E_HASH_L23_RSS_MASK)
+#define I40E_HASH_IPV6_L23_RSS_MASK	(RTE_ETH_RSS_IPV6 | I40E_HASH_L23_RSS_MASK)
 
 #define I40E_HASH_L234_RSS_MASK		(I40E_HASH_L23_RSS_MASK | \
-					ETH_RSS_PORT | ETH_RSS_L4_SRC_ONLY | \
-					ETH_RSS_L4_DST_ONLY)
+					RTE_ETH_RSS_PORT | RTE_ETH_RSS_L4_SRC_ONLY | \
+					RTE_ETH_RSS_L4_DST_ONLY)
 
-#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV4)
-#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | ETH_RSS_IPV6)
+#define I40E_HASH_IPV4_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV4)
+#define I40E_HASH_IPV6_L234_RSS_MASK	(I40E_HASH_L234_RSS_MASK | RTE_ETH_RSS_IPV6)
 
-#define I40E_HASH_L4_TYPES		(ETH_RSS_NONFRAG_IPV4_TCP | \
-					ETH_RSS_NONFRAG_IPV4_UDP | \
-					ETH_RSS_NONFRAG_IPV4_SCTP | \
-					ETH_RSS_NONFRAG_IPV6_TCP | \
-					ETH_RSS_NONFRAG_IPV6_UDP | \
-					ETH_RSS_NONFRAG_IPV6_SCTP)
+#define I40E_HASH_L4_TYPES		(RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+					RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* Current supported patterns and RSS types.
  * All items that have the same pattern types are together.
@@ -232,68 +232,68 @@ struct i40e_hash_match_pattern {
 static const struct i40e_hash_match_pattern match_patterns[] = {
 	/* Ether */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_ETH,
-			      ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
+			      RTE_ETH_RSS_L2_PAYLOAD | I40E_HASH_L2_RSS_MASK,
 			      I40E_FILTER_PCTYPE_L2_PAYLOAD),
 
 	/* IPv4 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV4 | I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV4),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4,
-			      ETH_RSS_NONFRAG_IPV4_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
 			      I40E_HASH_IPV4_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_TCP,
-			      ETH_RSS_NONFRAG_IPV4_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_TCP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_UDP,
-			      ETH_RSS_NONFRAG_IPV4_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_UDP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV4_SCTP,
-			      ETH_RSS_NONFRAG_IPV4_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
 			      I40E_HASH_IPV4_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV4_SCTP),
 
 	/* IPv6 */
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
+			      RTE_ETH_RSS_FRAG_IPV6 | I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_FRAG_IPV6),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6,
-			      ETH_RSS_NONFRAG_IPV6_OTHER |
+			      RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
 			      I40E_HASH_IPV6_L23_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_OTHER),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_TCP,
-			      ETH_RSS_NONFRAG_IPV6_TCP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_TCP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_TCP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_UDP,
-			      ETH_RSS_NONFRAG_IPV6_UDP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_UDP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_UDP),
 
 	I40E_HASH_MAP_PATTERN(I40E_PHINT_IPV6_SCTP,
-			      ETH_RSS_NONFRAG_IPV6_SCTP |
+			      RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
 			      I40E_HASH_IPV6_L234_RSS_MASK,
 			      I40E_FILTER_PCTYPE_NONF_IPV6_SCTP),
 
 	/* ESP */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV4_UDP),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_UDP_ESP,
-				  ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
+				  RTE_ETH_RSS_ESP, I40E_CUSTOMIZED_ESP_IPV6_UDP),
 
 	/* GTPC */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPC,
@@ -308,27 +308,27 @@ static const struct i40e_hash_match_pattern match_patterns[] = {
 				  I40E_HASH_IPV4_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU,
 				  I40E_HASH_IPV6_L234_RSS_MASK,
 				  I40E_CUSTOMIZED_GTPU),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV4,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV4),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_GTPU_IPV6,
-				  ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
+				  RTE_ETH_RSS_GTPU, I40E_CUSTOMIZED_GTPU_IPV6),
 
 	/* L2TPV3 */
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV4_L2TPV3),
 	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_L2TPV3,
-				  ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
+				  RTE_ETH_RSS_L2TPV3, I40E_CUSTOMIZED_IPV6_L2TPV3),
 
 	/* AH */
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV4_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV4),
-	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, ETH_RSS_AH,
+	I40E_HASH_MAP_CUS_PATTERN(I40E_PHINT_IPV6_AH, RTE_ETH_RSS_AH,
 				  I40E_CUSTOMIZED_AH_IPV6),
 };
 
@@ -564,29 +564,29 @@ i40e_hash_get_inset(uint64_t rss_types)
 	/* If SRC_ONLY and DST_ONLY of the same level are used simultaneously,
 	 * it is the same case as none of them are added.
 	 */
-	mask = rss_types & (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY);
-	if (mask == ETH_RSS_L2_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L2_SRC_ONLY)
 		inset &= ~I40E_INSET_DMAC;
-	else if (mask == ETH_RSS_L2_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L2_DST_ONLY)
 		inset &= ~I40E_INSET_SMAC;
 
-	mask = rss_types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
-	if (mask == ETH_RSS_L3_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L3_SRC_ONLY)
 		inset &= ~(I40E_INSET_IPV4_DST | I40E_INSET_IPV6_DST);
-	else if (mask == ETH_RSS_L3_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L3_DST_ONLY)
 		inset &= ~(I40E_INSET_IPV4_SRC | I40E_INSET_IPV6_SRC);
 
-	mask = rss_types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
-	if (mask == ETH_RSS_L4_SRC_ONLY)
+	mask = rss_types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
+	if (mask == RTE_ETH_RSS_L4_SRC_ONLY)
 		inset &= ~I40E_INSET_DST_PORT;
-	else if (mask == ETH_RSS_L4_DST_ONLY)
+	else if (mask == RTE_ETH_RSS_L4_DST_ONLY)
 		inset &= ~I40E_INSET_SRC_PORT;
 
 	if (rss_types & I40E_HASH_L4_TYPES) {
 		uint64_t l3_mask = rss_types &
-				   (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+				   (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
 		uint64_t l4_mask = rss_types &
-				   (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+				   (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
 
 		if (l3_mask && !l4_mask)
 			inset &= ~(I40E_INSET_SRC_PORT | I40E_INSET_DST_PORT);
@@ -825,7 +825,7 @@ i40e_hash_config(struct i40e_pf *pf,
 
 	/* Update lookup table */
 	if (rss_info->queue_num > 0) {
-		uint8_t lut[ETH_RSS_RETA_SIZE_512];
+		uint8_t lut[RTE_ETH_RSS_RETA_SIZE_512];
 		uint32_t i, j = 0;
 
 		for (i = 0; i < hw->func_caps.rss_table_size; i++) {
@@ -932,7 +932,7 @@ i40e_hash_parse_queues(const struct rte_eth_dev *dev,
 			    "RSS key is ignored when queues specified");
 
 	pf = I40E_DEV_PRIVATE_TO_PF(dev->data->dev_private);
-	if (pf->dev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_VMDQ_FLAG)
+	if (pf->dev_data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_VMDQ_FLAG)
 		max_queue = i40e_pf_calc_configured_queues_num(pf);
 	else
 		max_queue = pf->dev_data->nb_rx_queues;
@@ -1070,22 +1070,22 @@ i40e_hash_validate_rss_types(uint64_t rss_types)
 	uint64_t type, mask;
 
 	/* Validate L2 */
-	type = ETH_RSS_ETH & rss_types;
-	mask = (ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY) & rss_types;
+	type = RTE_ETH_RSS_ETH & rss_types;
+	mask = (RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L3 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-	       ETH_RSS_NONFRAG_IPV4_OTHER | ETH_RSS_IPV6 |
-	       ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
-	mask = (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+	       RTE_ETH_RSS_NONFRAG_IPV4_OTHER | RTE_ETH_RSS_IPV6 |
+	       RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER) & rss_types;
+	mask = (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
 	/* Validate L4 */
-	type = (I40E_HASH_L4_TYPES | ETH_RSS_PORT) & rss_types;
-	mask = (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY) & rss_types;
+	type = (I40E_HASH_L4_TYPES | RTE_ETH_RSS_PORT) & rss_types;
+	mask = (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY) & rss_types;
 	if (!type && mask)
 		return false;
 
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index e2d8b2b5f7f1..ccb3924a5f68 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1207,24 +1207,24 @@ i40e_notify_vf_link_status(struct rte_eth_dev *dev, struct i40e_pf_vf *vf)
 	event.event_data.link_event.link_status =
 		dev->data->dev_link.link_status;
 
-	/* need to convert the ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
+	/* need to convert the RTE_ETH_SPEED_xxx into VIRTCHNL_LINK_SPEED_xxx */
 	switch (dev->data->dev_link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_100MB;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_10GB;
 		break;
-	case ETH_SPEED_NUM_20G:
+	case RTE_ETH_SPEED_NUM_20G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_20GB;
 		break;
-	case ETH_SPEED_NUM_25G:
+	case RTE_ETH_SPEED_NUM_25G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_25GB;
 		break;
-	case ETH_SPEED_NUM_40G:
+	case RTE_ETH_SPEED_NUM_40G:
 		event.event_data.link_event.link_speed = VIRTCHNL_LINK_SPEED_40GB;
 		break;
 	default:
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 8329cbdd4e30..3bad4052ed1b 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1329,7 +1329,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	for (i = 0; i < tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		if (k) {
 			for (j = 0; j != k; j += RTE_I40E_TX_MAX_FREE_BUF_SZ) {
 				for (i = 0; i < RTE_I40E_TX_MAX_FREE_BUF_SZ; ++i, ++txep) {
@@ -2005,7 +2005,7 @@ i40e_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->queue_id = queue_idx;
 	rxq->reg_idx = reg_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2265,7 +2265,7 @@ i40e_dev_tx_queue_setup_runtime(struct rte_eth_dev *dev,
 	}
 	/* check simple tx conflict */
 	if (ad->tx_simple_allowed) {
-		if ((txq->offloads & ~DEV_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
+		if ((txq->offloads & ~RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) != 0 ||
 				txq->tx_rs_thresh < RTE_PMD_I40E_TX_MAX_BURST) {
 			PMD_DRV_LOG(ERR, "No-simple tx is required.");
 			return -EINVAL;
@@ -2925,7 +2925,7 @@ i40e_rx_queue_config(struct i40e_rx_queue *rxq)
 	rxq->max_pkt_len =
 		RTE_MIN((uint32_t)(hw->func_caps.rx_buf_chain_len *
 			rxq->rx_buf_len), data->dev_conf.rxmode.max_rx_pkt_len);
-	if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (rxq->max_pkt_len <= I40E_ETH_MAX_LEN ||
 			rxq->max_pkt_len > I40E_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must "
@@ -3441,7 +3441,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		 (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		 (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		 txq->tx_rs_thresh >= RTE_PMD_I40E_TX_MAX_BURST);
 	ad->tx_vec_allowed = (ad->tx_simple_allowed &&
 			txq->tx_rs_thresh <= RTE_I40E_TX_MAX_FREE_BUF_SZ);
diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h
index 5ccf5773e857..303a4db47dbd 100644
--- a/drivers/net/i40e/i40e_rxtx.h
+++ b/drivers/net/i40e/i40e_rxtx.h
@@ -120,7 +120,7 @@ struct i40e_rx_queue {
 	bool rx_deferred_start; /**< don't start this queue in dev start */
 	uint16_t rx_using_sse; /**<flag indicate the usage of vPMD for rx */
 	uint8_t dcb_tc;         /**< Traffic class of rx queue */
-	uint64_t offloads; /**< Rx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Rx offload flags of RTE_ETH_RX_OFFLOAD_* */
 };
 
 struct i40e_tx_entry {
@@ -165,7 +165,7 @@ struct i40e_tx_queue {
 	bool q_set; /**< indicate if tx queue has been configured */
 	bool tx_deferred_start; /**< don't start this queue in dev start */
 	uint8_t dcb_tc;         /**< Traffic class of tx queue */
-	uint64_t offloads; /**< Tx offload flags of DEV_RX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 };
 
 /** Offload features */
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index bd21d6422394..5f00d43950aa 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -899,7 +899,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h
index f52ed98d62d0..0192164c35fa 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_common.h
+++ b/drivers/net/i40e/i40e_rxtx_vec_common.h
@@ -100,7 +100,7 @@ i40e_tx_free_bufs(struct i40e_tx_queue *txq)
 	  */
 	txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)];
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < n; i++) {
 			free[i] = txep[i].mbuf;
 			txep[i].mbuf = NULL;
@@ -211,7 +211,7 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 	struct i40e_adapter *ad =
 		I40E_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
 	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 	struct i40e_rx_queue *rxq;
 	uint16_t desc, i;
 	bool first_queue;
@@ -221,11 +221,11 @@ i40e_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 		return -1;
 
 	 /* no header split support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_HEADER_SPLIT)
 		return -1;
 
 	/* no QinQ support */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 		return -1;
 
 	/**
diff --git a/drivers/net/i40e/i40e_vf_representor.c b/drivers/net/i40e/i40e_vf_representor.c
index 0481b5538132..6d90b0f3511b 100644
--- a/drivers/net/i40e/i40e_vf_representor.c
+++ b/drivers/net/i40e/i40e_vf_representor.c
@@ -42,30 +42,30 @@ i40e_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_rx_pktlen = I40E_FRAME_SIZE_MAX;
 	dev_info->hash_key_size = (I40E_VFQF_HKEY_MAX_INDEX + 1) *
 		sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_64;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_64;
 	dev_info->flow_type_rss_offloads = I40E_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = I40E_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_MULTI_SEGS  |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -385,19 +385,19 @@ i40e_vf_representor_vlan_offload_set(struct rte_eth_dev *ethdev, int mask)
 		return -EINVAL;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* Enable or disable VLAN filtering offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			return i40e_vsi_config_vlan_filter(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_filter(vsi, FALSE);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping offload */
 		if (ethdev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_STRIP)
+		    RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			return i40e_vsi_config_vlan_stripping(vsi, TRUE);
 		else
 			return i40e_vsi_config_vlan_stripping(vsi, FALSE);
diff --git a/drivers/net/iavf/iavf.h b/drivers/net/iavf/iavf.h
index b3bd07811198..1d4383e89327 100644
--- a/drivers/net/iavf/iavf.h
+++ b/drivers/net/iavf/iavf.h
@@ -48,18 +48,18 @@
 	VIRTCHNL_VF_OFFLOAD_RX_POLLING)
 
 #define IAVF_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 |         \
-	ETH_RSS_NONFRAG_IPV4_TCP |  \
-	ETH_RSS_NONFRAG_IPV4_UDP |  \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 |         \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP |  \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 #define IAVF_MISC_VEC_ID                RTE_INTR_VEC_ZERO_OFFSET
 #define IAVF_RX_VEC_START               RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 574cfe055e7c..fc0087968b78 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -265,53 +265,53 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	static const uint64_t map_hena_rss[] = {
 		/* IPv4 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_UDP] =
-				ETH_RSS_NONFRAG_IPV4_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_TCP] =
-				ETH_RSS_NONFRAG_IPV4_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_SCTP] =
-				ETH_RSS_NONFRAG_IPV4_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV4_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV4_OTHER] =
-				ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = ETH_RSS_FRAG_IPV4,
+				RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV4] = RTE_ETH_RSS_FRAG_IPV4,
 
 		/* IPv6 */
 		[IAVF_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_UDP] =
-				ETH_RSS_NONFRAG_IPV6_UDP,
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_TCP] =
-				ETH_RSS_NONFRAG_IPV6_TCP,
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_SCTP] =
-				ETH_RSS_NONFRAG_IPV6_SCTP,
+				RTE_ETH_RSS_NONFRAG_IPV6_SCTP,
 		[IAVF_FILTER_PCTYPE_NONF_IPV6_OTHER] =
-				ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = ETH_RSS_FRAG_IPV6,
+				RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IAVF_FILTER_PCTYPE_FRAG_IPV6] = RTE_ETH_RSS_FRAG_IPV6,
 
 		/* L2 Payload */
-		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = ETH_RSS_L2_PAYLOAD
+		[IAVF_FILTER_PCTYPE_L2_PAYLOAD] = RTE_ETH_RSS_L2_PAYLOAD
 	};
 
-	const uint64_t ipv4_rss = ETH_RSS_NONFRAG_IPV4_UDP |
-				  ETH_RSS_NONFRAG_IPV4_TCP |
-				  ETH_RSS_NONFRAG_IPV4_SCTP |
-				  ETH_RSS_NONFRAG_IPV4_OTHER |
-				  ETH_RSS_FRAG_IPV4;
+	const uint64_t ipv4_rss = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV4_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV4;
 
-	const uint64_t ipv6_rss = ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_NONFRAG_IPV6_SCTP |
-				  ETH_RSS_NONFRAG_IPV6_OTHER |
-				  ETH_RSS_FRAG_IPV6;
+	const uint64_t ipv6_rss = RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_SCTP |
+				  RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+				  RTE_ETH_RSS_FRAG_IPV6;
 
 	struct iavf_info *vf =  IAVF_DEV_PRIVATE_TO_VF(adapter);
 	uint64_t caps = 0, hena = 0, valid_rss_hf = 0;
@@ -330,13 +330,13 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	/**
-	 * ETH_RSS_IPV4 and ETH_RSS_IPV6 can be considered as 2
+	 * RTE_ETH_RSS_IPV4 and RTE_ETH_RSS_IPV6 can be considered as 2
 	 * generalizations of all other IPv4 and IPv6 RSS types.
 	 */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		rss_hf |= ipv4_rss;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		rss_hf |= ipv6_rss;
 
 	RTE_BUILD_BUG_ON(RTE_DIM(map_hena_rss) > sizeof(uint64_t) * CHAR_BIT);
@@ -362,10 +362,10 @@ iavf_config_rss_hf(struct iavf_adapter *adapter, uint64_t rss_hf)
 	}
 
 	if (valid_rss_hf & ipv4_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV4;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV4;
 
 	if (valid_rss_hf & ipv6_rss)
-		valid_rss_hf |= rss_hf & ETH_RSS_IPV6;
+		valid_rss_hf |= rss_hf & RTE_ETH_RSS_IPV6;
 
 	if (rss_hf & ~valid_rss_hf)
 		PMD_DRV_LOG(WARNING, "Unsupported rss_hf 0x%" PRIx64,
@@ -466,7 +466,7 @@ iavf_dev_vlan_insert_set(struct rte_eth_dev *dev)
 		return 0;
 
 	enable = !!(dev->data->dev_conf.txmode.offloads &
-		    DEV_TX_OFFLOAD_VLAN_INSERT);
+		    RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	iavf_config_vlan_insert_v2(adapter, enable);
 
 	return 0;
@@ -478,10 +478,10 @@ iavf_dev_init_vlan(struct rte_eth_dev *dev)
 	int err;
 
 	err = iavf_dev_vlan_offload_set(dev,
-					ETH_VLAN_STRIP_MASK |
-					ETH_QINQ_STRIP_MASK |
-					ETH_VLAN_FILTER_MASK |
-					ETH_VLAN_EXTEND_MASK);
+					RTE_ETH_VLAN_STRIP_MASK |
+					RTE_ETH_QINQ_STRIP_MASK |
+					RTE_ETH_VLAN_FILTER_MASK |
+					RTE_ETH_VLAN_EXTEND_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to update vlan offload");
 		return err;
@@ -511,8 +511,8 @@ iavf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_vec_allowed = true;
 	ad->tx_vec_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Large VF setting */
 	if (num_queue_pairs > IAVF_MAX_NUM_QUEUES_DFLT) {
@@ -585,7 +585,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
 	/* Check if the jumbo frame and maximum packet length are set
 	 * correctly.
 	 */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (max_pkt_len <= IAVF_ETH_MAX_LEN ||
 		    max_pkt_len > IAVF_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -608,7 +608,7 @@ iavf_init_rxq(struct rte_eth_dev *dev, struct iavf_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    rxq->max_pkt_len > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -943,35 +943,35 @@ iavf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->flow_type_rss_offloads = IAVF_RSS_OFFLOAD_ALL;
 	dev_info->max_mac_addrs = IAVF_NUM_MACADDR_MAX;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_CRC)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_KEEP_CRC;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_free_thresh = IAVF_DEFAULT_RX_FREE_THRESH,
@@ -1031,42 +1031,42 @@ iavf_dev_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (vf->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = vf->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = vf->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -1214,14 +1214,14 @@ iavf_dev_vlan_offload_set_v2(struct rte_eth_dev *dev, int mask)
 	bool enable;
 	int err;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 
 		iavf_iterate_vlan_filters_v2(dev, enable);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		enable = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		enable = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		err = iavf_config_vlan_strip_v2(adapter, enable);
 		/* If not support, the stripping is already disabled by PF */
@@ -1250,9 +1250,9 @@ iavf_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		/* Enable or disable VLAN stripping */
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			err = iavf_enable_vlan_strip(adapter);
 		else
 			err = iavf_disable_vlan_strip(adapter);
@@ -1294,8 +1294,8 @@ iavf_dev_rss_reta_update(struct rte_eth_dev *dev,
 	rte_memcpy(lut, vf->rss_lut, reta_size);
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -1331,8 +1331,8 @@ iavf_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = vf->rss_lut[i];
 	}
@@ -1457,10 +1457,10 @@ iavf_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > IAVF_ETH_MAX_LEN)
 		dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_JUMBO_FRAME;
+				RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+				~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
@@ -1564,7 +1564,7 @@ iavf_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	ret = iavf_query_stats(adapter, &pstats);
 	if (ret == 0) {
 		uint8_t crc_stats_len = (dev->data->dev_conf.rxmode.offloads &
-					 DEV_RX_OFFLOAD_KEEP_CRC) ? 0 :
+					 RTE_ETH_RX_OFFLOAD_KEEP_CRC) ? 0 :
 					 RTE_ETHER_CRC_LEN;
 		iavf_update_stats(vsi, pstats);
 		stats->ipackets = pstats->rx_unicast + pstats->rx_multicast +
diff --git a/drivers/net/iavf/iavf_hash.c b/drivers/net/iavf/iavf_hash.c
index 2b03dad8589c..1329a389f742 100644
--- a/drivers/net/iavf/iavf_hash.c
+++ b/drivers/net/iavf/iavf_hash.c
@@ -341,83 +341,83 @@ struct virtchnl_proto_hdrs ipv4_ecpri_tmplt = {
 /* rss type super set */
 
 /* IPv4 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV4	(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4)
+#define IAVF_RSS_TYPE_OUTER_IPV4	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4)
 #define IAVF_RSS_TYPE_OUTER_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 #define IAVF_RSS_TYPE_OUTER_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 #define IAVF_RSS_TYPE_OUTER_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 /* IPv6 outer */
-#define IAVF_RSS_TYPE_OUTER_IPV6	(ETH_RSS_ETH | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_OUTER_IPV6	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define IAVF_RSS_TYPE_OUTER_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 #define IAVF_RSS_TYPE_OUTER_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 #define IAVF_RSS_TYPE_OUTER_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 /* VLAN IPV4 */
 #define IAVF_RSS_TYPE_VLAN_IPV4		(IAVF_RSS_TYPE_OUTER_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_UDP	(IAVF_RSS_TYPE_OUTER_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_TCP	(IAVF_RSS_TYPE_OUTER_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV4_SCTP	(IAVF_RSS_TYPE_OUTER_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define IAVF_RSS_TYPE_VLAN_IPV6		(IAVF_RSS_TYPE_OUTER_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_FRAG	(IAVF_RSS_TYPE_OUTER_IPV6_FRAG | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_UDP	(IAVF_RSS_TYPE_OUTER_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_TCP	(IAVF_RSS_TYPE_OUTER_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define IAVF_RSS_TYPE_VLAN_IPV6_SCTP	(IAVF_RSS_TYPE_OUTER_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* IPv4 inner */
-#define IAVF_RSS_TYPE_INNER_IPV4	ETH_RSS_IPV4
-#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV4	RTE_ETH_RSS_IPV4
+#define IAVF_RSS_TYPE_INNER_IPV4_UDP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV4_TCP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV4_SCTP	(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 /* IPv6 inner */
-#define IAVF_RSS_TYPE_INNER_IPV6	ETH_RSS_IPV6
-#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define IAVF_RSS_TYPE_INNER_IPV6	RTE_ETH_RSS_IPV6
+#define IAVF_RSS_TYPE_INNER_IPV6_UDP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define IAVF_RSS_TYPE_INNER_IPV6_TCP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define IAVF_RSS_TYPE_INNER_IPV6_SCTP	(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 /* GTPU IPv4 */
 #define IAVF_RSS_TYPE_GTPU_IPV4		(IAVF_RSS_TYPE_INNER_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_UDP	(IAVF_RSS_TYPE_INNER_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV4_TCP	(IAVF_RSS_TYPE_INNER_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define IAVF_RSS_TYPE_GTPU_IPV6		(IAVF_RSS_TYPE_INNER_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_UDP	(IAVF_RSS_TYPE_INNER_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define IAVF_RSS_TYPE_GTPU_IPV6_TCP	(IAVF_RSS_TYPE_INNER_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* ESP, AH, L2TPV3 and PFCP */
-#define IAVF_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define IAVF_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define IAVF_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define IAVF_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define IAVF_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /**
  * Supported pattern for hash.
@@ -435,7 +435,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv4_udp,		IAVF_RSS_TYPE_VLAN_IPV4_UDP,	&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_tcp,		IAVF_RSS_TYPE_VLAN_IPV4_TCP,	&outer_ipv4_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv4_sctp,		IAVF_RSS_TYPE_VLAN_IPV4_SCTP,	&outer_ipv4_sctp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpu,			ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
+	{iavf_pattern_eth_ipv4_gtpu,			RTE_ETH_RSS_IPV4,			&outer_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4,		IAVF_RSS_TYPE_GTPU_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_udp,		IAVF_RSS_TYPE_GTPU_IPV4_UDP,	&inner_ipv4_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv4_tcp,		IAVF_RSS_TYPE_GTPU_IPV4_TCP,	&inner_ipv4_tcp_tmplt},
@@ -477,9 +477,9 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv4_ah,			IAVF_RSS_TYPE_IPV4_AH,		&ipv4_ah_tmplt},
 	{iavf_pattern_eth_ipv4_l2tpv3,			IAVF_RSS_TYPE_IPV4_L2TPV3,	&ipv4_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv4_pfcp,			IAVF_RSS_TYPE_IPV4_PFCP,	&ipv4_pfcp_tmplt},
-	{iavf_pattern_eth_ipv4_gtpc,			ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
-	{iavf_pattern_eth_ecpri,			ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
-	{iavf_pattern_eth_ipv4_ecpri,			ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_gtpc,			RTE_ETH_RSS_IPV4,			&ipv4_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ecpri,			RTE_ETH_RSS_ECPRI,			&eth_ecpri_tmplt},
+	{iavf_pattern_eth_ipv4_ecpri,			RTE_ETH_RSS_ECPRI,			&ipv4_ecpri_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4,	&inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv4,		IAVF_RSS_TYPE_INNER_IPV4, &inner_ipv4_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv4_tcp,	IAVF_RSS_TYPE_INNER_IPV4_TCP, &inner_ipv4_tcp_tmplt},
@@ -497,7 +497,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_vlan_ipv6_udp,		IAVF_RSS_TYPE_VLAN_IPV6_UDP,	&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_tcp,		IAVF_RSS_TYPE_VLAN_IPV6_TCP,	&outer_ipv6_tcp_tmplt},
 	{iavf_pattern_eth_vlan_ipv6_sctp,		IAVF_RSS_TYPE_VLAN_IPV6_SCTP,	&outer_ipv6_sctp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpu,			ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
+	{iavf_pattern_eth_ipv6_gtpu,			RTE_ETH_RSS_IPV6,			&outer_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6,		IAVF_RSS_TYPE_GTPU_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_udp,		IAVF_RSS_TYPE_GTPU_IPV6_UDP,	&inner_ipv6_udp_tmplt},
 	{iavf_pattern_eth_ipv4_gtpu_ipv6_tcp,		IAVF_RSS_TYPE_GTPU_IPV6_TCP,	&inner_ipv6_tcp_tmplt},
@@ -539,7 +539,7 @@ static struct iavf_pattern_match_item iavf_hash_pattern_list[] = {
 	{iavf_pattern_eth_ipv6_ah,			IAVF_RSS_TYPE_IPV6_AH,		&ipv6_ah_tmplt},
 	{iavf_pattern_eth_ipv6_l2tpv3,			IAVF_RSS_TYPE_IPV6_L2TPV3,	&ipv6_l2tpv3_tmplt},
 	{iavf_pattern_eth_ipv6_pfcp,			IAVF_RSS_TYPE_IPV6_PFCP,	&ipv6_pfcp_tmplt},
-	{iavf_pattern_eth_ipv6_gtpc,			ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
+	{iavf_pattern_eth_ipv6_gtpc,			RTE_ETH_RSS_IPV6,			&ipv6_udp_gtpc_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6,	&inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv6_gre_ipv6,		IAVF_RSS_TYPE_INNER_IPV6, &inner_ipv6_tmplt},
 	{iavf_pattern_eth_ipv4_gre_ipv6_tcp,	IAVF_RSS_TYPE_INNER_IPV6_TCP, &inner_ipv6_tcp_tmplt},
@@ -573,57 +573,57 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
 	struct virtchnl_rss_cfg rss_cfg;
 
 #define IAVF_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 	rss_cfg.rss_algorithm = VIRTCHNL_RSS_ALG_TOEPLITZ_ASYMMETRIC;
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		rss_cfg.proto_hdrs = inner_ipv4_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv4_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv4_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv4_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		rss_cfg.proto_hdrs = inner_ipv6_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		rss_cfg.proto_hdrs = inner_ipv6_udp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		rss_cfg.proto_hdrs = inner_ipv6_tcp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		rss_cfg.proto_hdrs = inner_ipv6_sctp_tmplt;
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_FRAG_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_FRAG_IPV4) {
 		struct virtchnl_proto_hdrs hdr = {
 			.tunnel_level = TUNNEL_LEVEL_OUTER,
 			.count = 3,
@@ -641,7 +641,7 @@ iavf_rss_hash_set(struct iavf_adapter *ad, uint64_t rss_hf, bool add)
 		iavf_add_del_rss_cfg(ad, &rss_cfg, add);
 	}
 
-	if (rss_hf & ETH_RSS_FRAG_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_FRAG_IPV6) {
 		struct virtchnl_proto_hdrs hdr = {
 			.tunnel_level = TUNNEL_LEVEL_OUTER,
 			.count = 3,
@@ -804,28 +804,28 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 		hdr = &proto_hdrs->proto_hdr[i];
 		switch (hdr->type) {
 		case VIRTCHNL_PROTO_HDR_ETH:
-			if (!(rss_type & ETH_RSS_ETH))
+			if (!(rss_type & RTE_ETH_RSS_ETH))
 				hdr->field_selector = 0;
-			else if (rss_type & ETH_RSS_L2_SRC_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_DST);
-			else if (rss_type & ETH_RSS_L2_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 				REFINE_PROTO_FLD(DEL, ETH_SRC);
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4) {
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 					iavf_hash_add_fragment_hdr(proto_hdrs, i + 1);
-				} else if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV4_DST);
 					REFINE_PROTO_FLD(DEL, IPV4_SRC);
 				}
@@ -835,11 +835,11 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV4_FRAG:
 			if (rss_type &
-			    (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			     ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV4_SCTP)) {
-				if (rss_type & ETH_RSS_FRAG_IPV4)
+			    (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			     RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_FRAG_IPV4)
 					REFINE_PROTO_FLD(ADD, IPV4_FRAG_PKID);
 			} else {
 				hdr->field_selector = 0;
@@ -847,17 +847,17 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6:
 			if (rss_type &
-			    (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			     ETH_RSS_NONFRAG_IPV6_UDP |
-			     ETH_RSS_NONFRAG_IPV6_TCP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			    (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
-				} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+				} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				} else if (rss_type &
-					   (ETH_RSS_L4_SRC_ONLY |
-					    ETH_RSS_L4_DST_ONLY)) {
+					   (RTE_ETH_RSS_L4_SRC_ONLY |
+					    RTE_ETH_RSS_L4_DST_ONLY)) {
 					REFINE_PROTO_FLD(DEL, IPV6_DST);
 					REFINE_PROTO_FLD(DEL, IPV6_SRC);
 				}
@@ -874,7 +874,7 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			}
 			break;
 		case VIRTCHNL_PROTO_HDR_IPV6_EH_FRAG:
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				REFINE_PROTO_FLD(ADD, IPV6_EH_FRAG_PKID);
 			else
 				hdr->field_selector = 0;
@@ -882,15 +882,15 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_UDP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_UDP |
-			     ETH_RSS_NONFRAG_IPV6_UDP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, UDP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
@@ -898,15 +898,15 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_TCP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_TCP |
-			     ETH_RSS_NONFRAG_IPV6_TCP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, TCP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
@@ -914,46 +914,46 @@ iavf_refine_proto_hdrs_l234(struct virtchnl_proto_hdrs *proto_hdrs,
 			break;
 		case VIRTCHNL_PROTO_HDR_SCTP:
 			if (rss_type &
-			    (ETH_RSS_NONFRAG_IPV4_SCTP |
-			     ETH_RSS_NONFRAG_IPV6_SCTP)) {
-				if (rss_type & ETH_RSS_L4_SRC_ONLY)
+			    (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+			     RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+				if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_DST_PORT);
-				else if (rss_type & ETH_RSS_L4_DST_ONLY)
+				else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 					REFINE_PROTO_FLD(DEL, SCTP_SRC_PORT);
 				else if (rss_type &
-					 (ETH_RSS_L3_SRC_ONLY |
-					  ETH_RSS_L3_DST_ONLY))
+					 (RTE_ETH_RSS_L3_SRC_ONLY |
+					  RTE_ETH_RSS_L3_DST_ONLY))
 					hdr->field_selector = 0;
 			} else {
 				hdr->field_selector = 0;
 			}
 			break;
 		case VIRTCHNL_PROTO_HDR_S_VLAN:
-			if (!(rss_type & ETH_RSS_S_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_S_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_C_VLAN:
-			if (!(rss_type & ETH_RSS_C_VLAN))
+			if (!(rss_type & RTE_ETH_RSS_C_VLAN))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_L2TPV3:
-			if (!(rss_type & ETH_RSS_L2TPV3))
+			if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ESP:
-			if (!(rss_type & ETH_RSS_ESP))
+			if (!(rss_type & RTE_ETH_RSS_ESP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_AH:
-			if (!(rss_type & ETH_RSS_AH))
+			if (!(rss_type & RTE_ETH_RSS_AH))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_PFCP:
-			if (!(rss_type & ETH_RSS_PFCP))
+			if (!(rss_type & RTE_ETH_RSS_PFCP))
 				hdr->field_selector = 0;
 			break;
 		case VIRTCHNL_PROTO_HDR_ECPRI:
-			if (!(rss_type & ETH_RSS_ECPRI))
+			if (!(rss_type & RTE_ETH_RSS_ECPRI))
 				hdr->field_selector = 0;
 			break;
 		default:
@@ -970,7 +970,7 @@ iavf_refine_proto_hdrs_gtpu(struct virtchnl_proto_hdrs *proto_hdrs,
 	struct virtchnl_proto_hdr *hdr;
 	int i;
 
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	for (i = 0; i < proto_hdrs->count; i++) {
@@ -1067,10 +1067,10 @@ static void iavf_refine_proto_hdrs(struct virtchnl_proto_hdrs *proto_hdrs,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE32 | RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE48 | RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -1081,27 +1081,27 @@ struct rss_attr_type {
 	uint64_t type;
 };
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE64)
 
 #define INVALID_RSS_ATTR	(RTE_ETH_RSS_L3_PRE32	| \
@@ -1111,9 +1111,9 @@ struct rss_attr_type {
 				 RTE_ETH_RSS_L3_PRE96)
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* current ipv6 prefix only supports prefix 64 bits*/
 	{RTE_ETH_RSS_L3_PRE64,				VALID_RSS_IPV6},
 	{INVALID_RSS_ATTR,				0}
@@ -1130,15 +1130,15 @@ iavf_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
 
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index e33fe4576b6e..4ff856fc82aa 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -609,7 +609,7 @@ iavf_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 	rxq->vsi = vsi;
 	rxq->offloads = offloads;
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
diff --git a/drivers/net/iavf/iavf_rxtx.h b/drivers/net/iavf/iavf_rxtx.h
index e210b913d633..096be81e8a69 100644
--- a/drivers/net/iavf/iavf_rxtx.h
+++ b/drivers/net/iavf/iavf_rxtx.h
@@ -24,22 +24,22 @@
 #define IAVF_VPMD_TX_MAX_FREE_BUF 64
 
 #define IAVF_TX_NO_VECTOR_FLAGS (				 \
-		DEV_TX_OFFLOAD_MULTI_SEGS |		 \
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define IAVF_TX_VECTOR_OFFLOAD (				 \
-		DEV_TX_OFFLOAD_VLAN_INSERT |		 \
-		DEV_TX_OFFLOAD_QINQ_INSERT |		 \
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		 \
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_TX_OFFLOAD_UDP_CKSUM |		 \
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		 \
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		 \
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define IAVF_RX_VECTOR_OFFLOAD (				 \
-		DEV_RX_OFFLOAD_CHECKSUM |		 \
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		 \
-		DEV_RX_OFFLOAD_VLAN |		 \
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		 \
+		RTE_ETH_RX_OFFLOAD_VLAN |		 \
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IAVF_VECTOR_PATH 0
 #define IAVF_VECTOR_OFFLOAD_PATH 1
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 475070e036ef..8f9a397e4143 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -904,7 +904,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_RSS_HASH ||
+				RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 				rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh7 =
@@ -957,7 +957,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 					raw_desc_bh1, 1);
 
 			if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/**
 				 * to shift the 32b RSS hash value to the
 				 * highest 32b of each 128b before mask
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 571161c0cdec..2329928c62cb 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1138,7 +1138,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
-			    DEV_RX_OFFLOAD_RSS_HASH ||
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH ||
 			    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
@@ -1191,7 +1191,7 @@ _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 						 raw_desc_bh1, 1);
 
 				if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
-						DEV_RX_OFFLOAD_RSS_HASH) {
+						RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 					/**
 					 * to shift the 32b RSS hash value to the
 					 * highest 32b of each 128b before mask
@@ -1719,7 +1719,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
 								rte_lcore_id());
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ee1e9055259b..58f928bdd7ca 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -818,7 +818,7 @@ _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_RSS_HASH) {
+				RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
diff --git a/drivers/net/ice/ice_dcf.c b/drivers/net/ice/ice_dcf.c
index 4c2e0c7216fd..ec53478083b4 100644
--- a/drivers/net/ice/ice_dcf.c
+++ b/drivers/net/ice/ice_dcf.c
@@ -807,7 +807,7 @@ ice_dcf_init_rss(struct ice_dcf_hw *hw)
 		PMD_DRV_LOG(DEBUG, "RSS is not supported");
 		return -ENOTSUP;
 	}
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_DRV_LOG(WARNING, "RSS is enabled by PF by default");
 		/* set all lut items to default queue */
 		memset(hw->rss_lut, 0, hw->vf_res->rss_lut_size);
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index cab7c4da8759..6226aa5a80c2 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -66,7 +66,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 	/* Check if the jumbo frame and maximum packet length are set
 	 * correctly.
 	 */
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (max_pkt_len <= ICE_ETH_MAX_LEN ||
 		    max_pkt_len > ICE_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must be "
@@ -89,7 +89,7 @@ ice_dcf_init_rxq(struct rte_eth_dev *dev, struct ice_rx_queue *rxq)
 	}
 
 	rxq->max_pkt_len = max_pkt_len;
-	if ((dev_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) ||
+	if ((dev_data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ||
 	    (rxq->max_pkt_len + 2 * ICE_VLAN_TAG_SIZE) > buf_size) {
 		dev_data->scattered_rx = 1;
 	}
@@ -559,7 +559,7 @@ ice_dcf_dev_start(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -620,7 +620,7 @@ ice_dcf_dev_stop(struct rte_eth_dev *dev)
 	}
 
 	ice_dcf_add_del_all_mac_addr(&dcf_ad->real_hw, false);
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	ad->pf.adapter_stopped = 1;
 
 	return 0;
@@ -635,8 +635,8 @@ ice_dcf_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	return 0;
 }
@@ -658,28 +658,28 @@ ice_dcf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -896,42 +896,42 @@ ice_dcf_link_update(struct rte_eth_dev *dev,
 	 */
 	switch (hw->link_speed) {
 	case 10:
-		new_link.link_speed = ETH_SPEED_NUM_10M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case 100:
-		new_link.link_speed = ETH_SPEED_NUM_100M;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case 1000:
-		new_link.link_speed = ETH_SPEED_NUM_1G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case 10000:
-		new_link.link_speed = ETH_SPEED_NUM_10G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case 20000:
-		new_link.link_speed = ETH_SPEED_NUM_20G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case 25000:
-		new_link.link_speed = ETH_SPEED_NUM_25G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case 40000:
-		new_link.link_speed = ETH_SPEED_NUM_40G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case 50000:
-		new_link.link_speed = ETH_SPEED_NUM_50G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case 100000:
-		new_link.link_speed = ETH_SPEED_NUM_100G;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	default:
-		new_link.link_speed = ETH_SPEED_NUM_NONE;
+		new_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
-	new_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	new_link.link_status = hw->link_up ? ETH_LINK_UP :
-					     ETH_LINK_DOWN;
+	new_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	new_link.link_status = hw->link_up ? RTE_ETH_LINK_UP :
+					     RTE_ETH_LINK_DOWN;
 	new_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	return rte_eth_linkstatus_set(dev, &new_link);
 }
@@ -950,11 +950,11 @@ ice_dcf_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(parent_hw, TNL_VXLAN,
 					udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_create_tunnel(parent_hw, TNL_ECPRI,
 					udp_tunnel->udp_port);
 		break;
@@ -981,8 +981,8 @@ ice_dcf_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
-	case RTE_TUNNEL_TYPE_ECPRI:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_ECPRI:
 		ret = ice_destroy_tunnel(parent_hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index 970461f3e90a..0dac1b92bfdb 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -37,7 +37,7 @@ ice_dcf_vf_repr_dev_configure(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -45,7 +45,7 @@ ice_dcf_vf_repr_dev_start(struct rte_eth_dev *dev)
 static int
 ice_dcf_vf_repr_dev_stop(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -135,29 +135,29 @@ ice_dcf_vf_repr_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->flow_type_rss_offloads = ICE_RSS_OFFLOAD_ALL;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_RSS_HASH;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -239,9 +239,9 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		return -ENOTSUP;
 
 	/* Vlan stripping setting */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		bool enable = !!(dev_conf->rxmode.offloads &
-				 DEV_RX_OFFLOAD_VLAN_STRIP);
+				 RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (enable && repr->outer_vlan_info.port_vlan_ena) {
 			PMD_DRV_LOG(ERR,
@@ -338,7 +338,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 	if (!ice_dcf_vlan_offload_ena(repr))
 		return -ENOTSUP;
 
-	if (vlan_type != ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
 		PMD_DRV_LOG(ERR,
 			    "Can accelerate only outer VLAN in QinQ\n");
 		return -EINVAL;
@@ -368,7 +368,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
 
 	if (repr->outer_vlan_info.stripping_ena) {
 		err = ice_dcf_vf_repr_vlan_offload_set(dev,
-						       ETH_VLAN_STRIP_MASK);
+						       RTE_ETH_VLAN_STRIP_MASK);
 		if (err) {
 			PMD_DRV_LOG(ERR,
 				    "Failed to reset VLAN stripping : %d\n",
@@ -441,7 +441,7 @@ ice_dcf_vf_repr_init_vlan(struct rte_eth_dev *vf_rep_eth_dev)
 	int err;
 
 	err = ice_dcf_vf_repr_vlan_offload_set(vf_rep_eth_dev,
-					       ETH_VLAN_STRIP_MASK);
+					       RTE_ETH_VLAN_STRIP_MASK);
 	if (err) {
 		PMD_DRV_LOG(ERR, "Failed to set VLAN offload");
 		return err;
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index a4cd39c954f1..459718ad33f6 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1449,9 +1449,9 @@ ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type)
 	TAILQ_INIT(&vsi->mac_list);
 	TAILQ_INIT(&vsi->vlan_list);
 
-	/* Be sync with ETH_RSS_RETA_SIZE_x maximum value definition */
+	/* Keep in sync with RTE_ETH_RSS_RETA_SIZE_x maximum value definition */
 	pf->hash_lut_size = hw->func_caps.common_cap.rss_table_size >
-			ETH_RSS_RETA_SIZE_512 ? ETH_RSS_RETA_SIZE_512 :
+			RTE_ETH_RSS_RETA_SIZE_512 ? RTE_ETH_RSS_RETA_SIZE_512 :
 			hw->func_caps.common_cap.rss_table_size;
 	pf->flags |= ICE_FLAG_RSS_AQ_CAPABLE;
 
@@ -2809,16 +2809,16 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	int ret;
 
 #define ICE_RSS_HF_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_FRAG_IPV6)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV6)
 
 	ret = ice_rem_vsi_rss_cfg(hw, vsi->idx);
 	if (ret)
@@ -2828,7 +2828,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	cfg.symm = 0;
 	cfg.hdr_type = ICE_RSS_OUTER_HEADERS;
 	/* Configure RSS for IPv4 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2838,7 +2838,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for IPv6 with src/dst addr as input set */
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2848,7 +2848,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -2859,7 +2859,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for udp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_UDP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -2870,7 +2870,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -2881,7 +2881,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for tcp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_TCP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -2892,7 +2892,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp4 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV4;
@@ -2903,7 +2903,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 	}
 
 	/* Configure RSS for sctp6 with src/dst addr and port as input set */
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_SCTP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_SCTP | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_SCTP_IPV6;
@@ -2913,7 +2913,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV4 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4;
@@ -2923,7 +2923,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_IPV6 |
 				ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6;
@@ -2933,7 +2933,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV4;
@@ -2943,7 +2943,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_UDP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_UDP_IPV6;
@@ -2953,7 +2953,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV4;
@@ -2963,7 +2963,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_PPPOE | ICE_FLOW_SEG_HDR_TCP |
 				ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_OTHER;
 		cfg.hash_flds = ICE_HASH_TCP_IPV6;
@@ -2973,7 +2973,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_FRAG_IPV4) {
+	if (rss_hf & RTE_ETH_RSS_FRAG_IPV4) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV4 | ICE_FLOW_SEG_HDR_IPV_FRAG;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV4 | BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -2982,7 +2982,7 @@ ice_rss_hash_set(struct ice_pf *pf, uint64_t rss_hf)
 				    __func__, ret);
 	}
 
-	if (rss_hf & ETH_RSS_FRAG_IPV6) {
+	if (rss_hf & RTE_ETH_RSS_FRAG_IPV6) {
 		cfg.addl_hdrs = ICE_FLOW_SEG_HDR_IPV6 | ICE_FLOW_SEG_HDR_IPV_FRAG;
 		cfg.hash_flds = ICE_FLOW_HASH_IPV6 | BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
 		ret = ice_add_rss_cfg_wrap(pf, vsi->idx, &cfg);
@@ -3124,8 +3124,8 @@ ice_dev_configure(struct rte_eth_dev *dev)
 	ad->rx_bulk_alloc_allowed = true;
 	ad->tx_simple_allowed = true;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_rx_queues) {
 		ret = ice_init_rss(pf);
@@ -3344,8 +3344,8 @@ ice_dev_start(struct rte_eth_dev *dev)
 	ice_set_rx_function(dev);
 	ice_set_tx_function(dev);
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-			ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+			RTE_ETH_VLAN_EXTEND_MASK;
 	ret = ice_vlan_offload_set(dev, mask);
 	if (ret) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
@@ -3449,40 +3449,40 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_KEEP_CRC |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_VLAN_FILTER;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->flow_type_rss_offloads = 0;
 
 	if (!is_safe_mode) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM |
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_QINQ_STRIP |
-			DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_RX_OFFLOAD_VLAN_EXTEND |
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_QINQ_INSERT |
-			DEV_TX_OFFLOAD_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM |
-			DEV_TX_OFFLOAD_SCTP_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 		dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL;
 	}
 
 	dev_info->rx_queue_offload_capa = 0;
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->reta_size = pf->hash_lut_size;
 	dev_info->hash_key_size = (VSIQF_HKEY_MAX_INDEX + 1) * sizeof(uint32_t);
@@ -3521,24 +3521,24 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_align = ICE_ALIGN_RING_DESC,
 	};
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			       ETH_LINK_SPEED_100M |
-			       ETH_LINK_SPEED_1G |
-			       ETH_LINK_SPEED_2_5G |
-			       ETH_LINK_SPEED_5G |
-			       ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_20G |
-			       ETH_LINK_SPEED_25G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			       RTE_ETH_LINK_SPEED_100M |
+			       RTE_ETH_LINK_SPEED_1G |
+			       RTE_ETH_LINK_SPEED_2_5G |
+			       RTE_ETH_LINK_SPEED_5G |
+			       RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_20G |
+			       RTE_ETH_LINK_SPEED_25G;
 
 	phy_type_low = hw->port_info->phy.phy_type_low;
 	phy_type_high = hw->port_info->phy.phy_type_high;
 
 	if (ICE_PHY_TYPE_SUPPORT_50G(phy_type_low))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 
 	if (ICE_PHY_TYPE_SUPPORT_100G_LOW(phy_type_low) ||
 			ICE_PHY_TYPE_SUPPORT_100G_HIGH(phy_type_high))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->nb_rx_queues = dev->data->nb_rx_queues;
 	dev_info->nb_tx_queues = dev->data->nb_tx_queues;
@@ -3603,8 +3603,8 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		status = ice_aq_get_link_info(hw->port_info, enable_lse,
 					      &link_status, NULL);
 		if (status != ICE_SUCCESS) {
-			link.link_speed = ETH_SPEED_NUM_100M;
-			link.link_duplex = ETH_LINK_FULL_DUPLEX;
+			link.link_speed = RTE_ETH_SPEED_NUM_100M;
+			link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 			PMD_DRV_LOG(ERR, "Failed to get link info");
 			goto out;
 		}
@@ -3620,55 +3620,55 @@ ice_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		goto out;
 
 	/* Full-duplex operation at all supported speeds */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	/* Parse the link status */
 	switch (link_status.link_speed) {
 	case ICE_AQ_LINK_SPEED_10MB:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case ICE_AQ_LINK_SPEED_100MB:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case ICE_AQ_LINK_SPEED_1000MB:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case ICE_AQ_LINK_SPEED_2500MB:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_5GB:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case ICE_AQ_LINK_SPEED_10GB:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case ICE_AQ_LINK_SPEED_20GB:
-		link.link_speed = ETH_SPEED_NUM_20G;
+		link.link_speed = RTE_ETH_SPEED_NUM_20G;
 		break;
 	case ICE_AQ_LINK_SPEED_25GB:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	case ICE_AQ_LINK_SPEED_40GB:
-		link.link_speed = ETH_SPEED_NUM_40G;
+		link.link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 	case ICE_AQ_LINK_SPEED_50GB:
-		link.link_speed = ETH_SPEED_NUM_50G;
+		link.link_speed = RTE_ETH_SPEED_NUM_50G;
 		break;
 	case ICE_AQ_LINK_SPEED_100GB:
-		link.link_speed = ETH_SPEED_NUM_100G;
+		link.link_speed = RTE_ETH_SPEED_NUM_100G;
 		break;
 	case ICE_AQ_LINK_SPEED_UNKNOWN:
 		PMD_DRV_LOG(ERR, "Unknown link speed");
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 	default:
 		PMD_DRV_LOG(ERR, "None link speed");
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 	}
 
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			      ETH_LINK_SPEED_FIXED);
+			      RTE_ETH_LINK_SPEED_FIXED);
 
 out:
 	ice_atomic_write_link_status(dev, &link);
@@ -3767,10 +3767,10 @@ ice_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	if (frame_size > ICE_ETH_MAX_LEN)
 		dev_data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		dev_data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
@@ -4161,15 +4161,15 @@ ice_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ice_vsi_config_vlan_filter(vsi, true);
 		else
 			ice_vsi_config_vlan_filter(vsi, false);
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			ice_vsi_config_vlan_stripping(vsi, true);
 		else
 			ice_vsi_config_vlan_stripping(vsi, false);
@@ -4284,8 +4284,8 @@ ice_rss_reta_update(struct rte_eth_dev *dev,
 		goto out;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			lut[i] = reta_conf[idx].reta[shift];
 	}
@@ -4334,8 +4334,8 @@ ice_rss_reta_query(struct rte_eth_dev *dev,
 		goto out;
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift))
 			reta_conf[idx].reta[shift] = lut[i];
 	}
@@ -5244,7 +5244,7 @@ ice_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_create_tunnel(hw, TNL_VXLAN, udp_tunnel->udp_port);
 		break;
 	default:
@@ -5268,7 +5268,7 @@ ice_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ice_destroy_tunnel(hw, udp_tunnel->udp_port, 0);
 		break;
 	default:
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index b4bf651c1c7f..1c4bc4e30349 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -115,19 +115,19 @@
 		       ICE_FLAG_VF_MAC_BY_PF)
 
 #define ICE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD)
 
 /**
  * The overhead from MTU to max frame size.
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index 54d14dfcddfb..beb863f70568 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -39,27 +39,27 @@
 #define ICE_IPV4_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_PROT)
 #define ICE_IPV6_PROT		BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PROT)
 
-#define VALID_RSS_IPV4_L4	(ETH_RSS_NONFRAG_IPV4_UDP	| \
-				 ETH_RSS_NONFRAG_IPV4_TCP	| \
-				 ETH_RSS_NONFRAG_IPV4_SCTP)
+#define VALID_RSS_IPV4_L4	(RTE_ETH_RSS_NONFRAG_IPV4_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
-#define VALID_RSS_IPV6_L4	(ETH_RSS_NONFRAG_IPV6_UDP	| \
-				 ETH_RSS_NONFRAG_IPV6_TCP	| \
-				 ETH_RSS_NONFRAG_IPV6_SCTP)
+#define VALID_RSS_IPV6_L4	(RTE_ETH_RSS_NONFRAG_IPV6_UDP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	| \
+				 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
-#define VALID_RSS_IPV4		(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
+#define VALID_RSS_IPV4		(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
 				 VALID_RSS_IPV4_L4)
-#define VALID_RSS_IPV6		(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | \
+#define VALID_RSS_IPV6		(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | \
 				 VALID_RSS_IPV6_L4)
 #define VALID_RSS_L3		(VALID_RSS_IPV4 | VALID_RSS_IPV6)
 #define VALID_RSS_L4		(VALID_RSS_IPV4_L4 | VALID_RSS_IPV6_L4)
 
-#define VALID_RSS_ATTR		(ETH_RSS_L3_SRC_ONLY	| \
-				 ETH_RSS_L3_DST_ONLY	| \
-				 ETH_RSS_L4_SRC_ONLY	| \
-				 ETH_RSS_L4_DST_ONLY	| \
-				 ETH_RSS_L2_SRC_ONLY	| \
-				 ETH_RSS_L2_DST_ONLY	| \
+#define VALID_RSS_ATTR		(RTE_ETH_RSS_L3_SRC_ONLY	| \
+				 RTE_ETH_RSS_L3_DST_ONLY	| \
+				 RTE_ETH_RSS_L4_SRC_ONLY	| \
+				 RTE_ETH_RSS_L4_DST_ONLY	| \
+				 RTE_ETH_RSS_L2_SRC_ONLY	| \
+				 RTE_ETH_RSS_L2_DST_ONLY	| \
 				 RTE_ETH_RSS_L3_PRE32	| \
 				 RTE_ETH_RSS_L3_PRE48	| \
 				 RTE_ETH_RSS_L3_PRE64)
@@ -373,80 +373,80 @@ struct ice_rss_hash_cfg eth_tmplt = {
 };
 
 /* IPv4 */
-#define ICE_RSS_TYPE_ETH_IPV4		(ETH_RSS_ETH | ETH_RSS_IPV4 | \
-					 ETH_RSS_FRAG_IPV4)
+#define ICE_RSS_TYPE_ETH_IPV4		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_FRAG_IPV4)
 #define ICE_RSS_TYPE_ETH_IPV4_UDP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 #define ICE_RSS_TYPE_ETH_IPV4_TCP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 #define ICE_RSS_TYPE_ETH_IPV4_SCTP	(ICE_RSS_TYPE_ETH_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
-#define ICE_RSS_TYPE_IPV4		ETH_RSS_IPV4
-#define ICE_RSS_TYPE_IPV4_UDP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_UDP)
-#define ICE_RSS_TYPE_IPV4_TCP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_TCP)
-#define ICE_RSS_TYPE_IPV4_SCTP		(ETH_RSS_IPV4 | \
-					 ETH_RSS_NONFRAG_IPV4_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+#define ICE_RSS_TYPE_IPV4		RTE_ETH_RSS_IPV4
+#define ICE_RSS_TYPE_IPV4_UDP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+#define ICE_RSS_TYPE_IPV4_TCP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+#define ICE_RSS_TYPE_IPV4_SCTP		(RTE_ETH_RSS_IPV4 | \
+					 RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 /* IPv6 */
-#define ICE_RSS_TYPE_ETH_IPV6		(ETH_RSS_ETH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(ETH_RSS_ETH | ETH_RSS_IPV6 | \
-					 ETH_RSS_FRAG_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_ETH_IPV6_FRAG	(RTE_ETH_RSS_ETH | RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_ETH_IPV6_UDP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 #define ICE_RSS_TYPE_ETH_IPV6_TCP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 #define ICE_RSS_TYPE_ETH_IPV6_SCTP	(ICE_RSS_TYPE_ETH_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
-#define ICE_RSS_TYPE_IPV6		ETH_RSS_IPV6
-#define ICE_RSS_TYPE_IPV6_UDP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_UDP)
-#define ICE_RSS_TYPE_IPV6_TCP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_TCP)
-#define ICE_RSS_TYPE_IPV6_SCTP		(ETH_RSS_IPV6 | \
-					 ETH_RSS_NONFRAG_IPV6_SCTP)
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ICE_RSS_TYPE_IPV6		RTE_ETH_RSS_IPV6
+#define ICE_RSS_TYPE_IPV6_UDP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+#define ICE_RSS_TYPE_IPV6_TCP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+#define ICE_RSS_TYPE_IPV6_SCTP		(RTE_ETH_RSS_IPV6 | \
+					 RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 /* VLAN IPV4 */
 #define ICE_RSS_TYPE_VLAN_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV4)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV4)
 #define ICE_RSS_TYPE_VLAN_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV4_SCTP	(ICE_RSS_TYPE_IPV4_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 /* VLAN IPv6 */
 #define ICE_RSS_TYPE_VLAN_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_FRAG	(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN | \
-					 ETH_RSS_FRAG_IPV6)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN | \
+					 RTE_ETH_RSS_FRAG_IPV6)
 #define ICE_RSS_TYPE_VLAN_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 #define ICE_RSS_TYPE_VLAN_IPV6_SCTP	(ICE_RSS_TYPE_IPV6_SCTP | \
-					 ETH_RSS_S_VLAN | ETH_RSS_C_VLAN)
+					 RTE_ETH_RSS_S_VLAN | RTE_ETH_RSS_C_VLAN)
 
 /* GTPU IPv4 */
 #define ICE_RSS_TYPE_GTPU_IPV4		(ICE_RSS_TYPE_IPV4 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_UDP	(ICE_RSS_TYPE_IPV4_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV4_TCP	(ICE_RSS_TYPE_IPV4_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 /* GTPU IPv6 */
 #define ICE_RSS_TYPE_GTPU_IPV6		(ICE_RSS_TYPE_IPV6 | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_UDP	(ICE_RSS_TYPE_IPV6_UDP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 #define ICE_RSS_TYPE_GTPU_IPV6_TCP	(ICE_RSS_TYPE_IPV6_TCP | \
-					 ETH_RSS_GTPU)
+					 RTE_ETH_RSS_GTPU)
 
 /* PPPOE */
-#define ICE_RSS_TYPE_PPPOE		(ETH_RSS_ETH | ETH_RSS_PPPOE)
+#define ICE_RSS_TYPE_PPPOE		(RTE_ETH_RSS_ETH | RTE_ETH_RSS_PPPOE)
 
 /* PPPOE IPv4 */
 #define ICE_RSS_TYPE_PPPOE_IPV4		(ICE_RSS_TYPE_IPV4 | \
@@ -465,17 +465,17 @@ struct ice_rss_hash_cfg eth_tmplt = {
 					 ICE_RSS_TYPE_PPPOE)
 
 /* ESP, AH, L2TPV3 and PFCP */
-#define ICE_RSS_TYPE_IPV4_ESP		(ETH_RSS_ESP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_ESP		(ETH_RSS_ESP | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_AH		(ETH_RSS_AH | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_AH		(ETH_RSS_AH | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_L2TPV3	(ETH_RSS_L2TPV3 | ETH_RSS_IPV6)
-#define ICE_RSS_TYPE_IPV4_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV4)
-#define ICE_RSS_TYPE_IPV6_PFCP		(ETH_RSS_PFCP | ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_ESP		(RTE_ETH_RSS_ESP | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_AH		(RTE_ETH_RSS_AH | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_L2TPV3	(RTE_ETH_RSS_L2TPV3 | RTE_ETH_RSS_IPV6)
+#define ICE_RSS_TYPE_IPV4_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV4)
+#define ICE_RSS_TYPE_IPV6_PFCP		(RTE_ETH_RSS_PFCP | RTE_ETH_RSS_IPV6)
 
 /* MAC */
-#define ICE_RSS_TYPE_ETH		ETH_RSS_ETH
+#define ICE_RSS_TYPE_ETH		RTE_ETH_RSS_ETH
 
 /**
  * Supported pattern for hash.
@@ -640,51 +640,51 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH) {
-		if (!(rss_type & ETH_RSS_ETH))
+		if (!(rss_type & RTE_ETH_RSS_ETH))
 			*hash_flds &= ~ICE_FLOW_HASH_ETH;
-		if (rss_type & ETH_RSS_L2_SRC_ONLY)
+		if (rss_type & RTE_ETH_RSS_L2_SRC_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_DA));
-		else if (rss_type & ETH_RSS_L2_DST_ONLY)
+		else if (rss_type & RTE_ETH_RSS_L2_DST_ONLY)
 			*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_SA));
 		*addl_hdrs &= ~ICE_FLOW_SEG_HDR_ETH;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ETH_NON_IP) {
-		if (rss_type & ETH_RSS_ETH)
+		if (rss_type & RTE_ETH_RSS_ETH)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_ETH_TYPE);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_VLAN) {
-		if (rss_type & ETH_RSS_C_VLAN)
+		if (rss_type & RTE_ETH_RSS_C_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_C_VLAN);
-		else if (rss_type & ETH_RSS_S_VLAN)
+		else if (rss_type & RTE_ETH_RSS_S_VLAN)
 			*hash_flds |= BIT_ULL(ICE_FLOW_FIELD_IDX_S_VLAN);
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PPPOE) {
-		if (!(rss_type & ETH_RSS_PPPOE))
+		if (!(rss_type & RTE_ETH_RSS_PPPOE))
 			*hash_flds &= ~ICE_FLOW_HASH_PPPOE_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV4) {
 		if (rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-		    ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV4) {
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV4) {
 				*addl_hdrs |= ICE_FLOW_SEG_HDR_IPV_FRAG;
 				*addl_hdrs &= ~(ICE_FLOW_SEG_HDR_IPV_OTHER);
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_ID);
 			}
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV4_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV4;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV4;
@@ -693,30 +693,30 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_IPV6) {
 		if (rss_type &
-		   (ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_FRAG_IPV6)
+		   (RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_FRAG_IPV6)
 				*hash_flds |=
 					BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_ID);
-			if (rss_type & ETH_RSS_L3_SRC_ONLY)
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
-			else if (rss_type & ETH_RSS_L3_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 			else if (rss_type &
-				(ETH_RSS_L4_SRC_ONLY |
-				ETH_RSS_L4_DST_ONLY))
+				(RTE_ETH_RSS_L4_SRC_ONLY |
+				RTE_ETH_RSS_L4_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_IPV6;
 		}
 
 		if (rss_type & RTE_ETH_RSS_L3_PRE32) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE32_DA));
 			} else {
@@ -725,10 +725,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE48) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE48_DA));
 			} else {
@@ -737,10 +737,10 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 			}
 		}
 		if (rss_type & RTE_ETH_RSS_L3_PRE64) {
-			if (rss_type & ETH_RSS_L3_SRC_ONLY) {
+			if (rss_type & RTE_ETH_RSS_L3_SRC_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_SA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_SA));
-			} else if (rss_type & ETH_RSS_L3_DST_ONLY) {
+			} else if (rss_type & RTE_ETH_RSS_L3_DST_ONLY) {
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_DA));
 				*hash_flds |= (BIT_ULL(ICE_FLOW_FIELD_IDX_IPV6_PRE64_DA));
 			} else {
@@ -752,15 +752,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_UDP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_UDP |
-		    ETH_RSS_NONFRAG_IPV6_UDP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_UDP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_UDP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_UDP_PORT;
@@ -769,15 +769,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_TCP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_TCP |
-		    ETH_RSS_NONFRAG_IPV6_TCP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_TCP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_TCP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_TCP_PORT;
@@ -786,15 +786,15 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_SCTP) {
 		if (rss_type &
-		   (ETH_RSS_NONFRAG_IPV4_SCTP |
-		    ETH_RSS_NONFRAG_IPV6_SCTP)) {
-			if (rss_type & ETH_RSS_L4_SRC_ONLY)
+		   (RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+		    RTE_ETH_RSS_NONFRAG_IPV6_SCTP)) {
+			if (rss_type & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_DST_PORT));
-			else if (rss_type & ETH_RSS_L4_DST_ONLY)
+			else if (rss_type & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_flds &= ~(BIT_ULL(ICE_FLOW_FIELD_IDX_SCTP_SRC_PORT));
 			else if (rss_type &
-				(ETH_RSS_L3_SRC_ONLY |
-				  ETH_RSS_L3_DST_ONLY))
+				(RTE_ETH_RSS_L3_SRC_ONLY |
+				  RTE_ETH_RSS_L3_DST_ONLY))
 				*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
 		} else {
 			*hash_flds &= ~ICE_FLOW_HASH_SCTP_PORT;
@@ -802,22 +802,22 @@ ice_refine_hash_cfg_l234(struct ice_rss_hash_cfg *hash_cfg,
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_L2TPV3) {
-		if (!(rss_type & ETH_RSS_L2TPV3))
+		if (!(rss_type & RTE_ETH_RSS_L2TPV3))
 			*hash_flds &= ~ICE_FLOW_HASH_L2TPV3_SESS_ID;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_ESP) {
-		if (!(rss_type & ETH_RSS_ESP))
+		if (!(rss_type & RTE_ETH_RSS_ESP))
 			*hash_flds &= ~ICE_FLOW_HASH_ESP_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_AH) {
-		if (!(rss_type & ETH_RSS_AH))
+		if (!(rss_type & RTE_ETH_RSS_AH))
 			*hash_flds &= ~ICE_FLOW_HASH_AH_SPI;
 	}
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_PFCP_SESSION) {
-		if (!(rss_type & ETH_RSS_PFCP))
+		if (!(rss_type & RTE_ETH_RSS_PFCP))
 			*hash_flds &= ~ICE_FLOW_HASH_PFCP_SEID;
 	}
 }
@@ -851,7 +851,7 @@ ice_refine_hash_cfg_gtpu(struct ice_rss_hash_cfg *hash_cfg,
 	uint64_t *hash_flds = &hash_cfg->hash_flds;
 
 	/* update hash field for gtpu eh/gtpu dwn/gtpu up. */
-	if (!(rss_type & ETH_RSS_GTPU))
+	if (!(rss_type & RTE_ETH_RSS_GTPU))
 		return;
 
 	if (*addl_hdrs & ICE_FLOW_SEG_HDR_GTPU_DWN)
@@ -873,10 +873,10 @@ static void ice_refine_hash_cfg(struct ice_rss_hash_cfg *hash_cfg,
 }
 
 static uint64_t invalid_rss_comb[] = {
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_UDP,
-	ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_UDP,
-	ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+	RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+	RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	RTE_ETH_RSS_L3_PRE40 |
 	RTE_ETH_RSS_L3_PRE56 |
 	RTE_ETH_RSS_L3_PRE96
@@ -888,9 +888,9 @@ struct rss_attr_type {
 };
 
 static struct rss_attr_type rss_attr_to_valid_type[] = {
-	{ETH_RSS_L2_SRC_ONLY | ETH_RSS_L2_DST_ONLY,	ETH_RSS_ETH},
-	{ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
-	{ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
+	{RTE_ETH_RSS_L2_SRC_ONLY | RTE_ETH_RSS_L2_DST_ONLY,	RTE_ETH_RSS_ETH},
+	{RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY,	VALID_RSS_L3},
+	{RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY,	VALID_RSS_L4},
 	/* current ipv6 prefix only supports prefix 64 bits*/
 	{RTE_ETH_RSS_L3_PRE32,				VALID_RSS_IPV6},
 	{RTE_ETH_RSS_L3_PRE48,				VALID_RSS_IPV6},
@@ -909,16 +909,16 @@ ice_any_invalid_rss_type(enum rte_eth_hash_function rss_func,
 	 * hash function.
 	 */
 	if (rss_func == RTE_ETH_HASH_FUNCTION_SYMMETRIC_TOEPLITZ) {
-		if (rss_type & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY |
-		    ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY))
+		if (rss_type & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY |
+		    RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY))
 			return true;
 
 		if (!(rss_type &
-		   (ETH_RSS_IPV4 | ETH_RSS_IPV6 |
-		    ETH_RSS_FRAG_IPV4 | ETH_RSS_FRAG_IPV6 |
-		    ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP |
-		    ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV6_TCP |
-		    ETH_RSS_NONFRAG_IPV4_SCTP | ETH_RSS_NONFRAG_IPV6_SCTP)))
+		   (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6 |
+		    RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_FRAG_IPV6 |
+		    RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+		    RTE_ETH_RSS_NONFRAG_IPV4_SCTP | RTE_ETH_RSS_NONFRAG_IPV6_SCTP)))
 			return true;
 	}
 
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 5d7ab4f047ee..63c07e001f07 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -280,7 +280,7 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq)
 				   ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len,
 				   dev_data->dev_conf.rxmode.max_rx_pkt_len);
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (rxq->max_pkt_len <= ICE_ETH_MAX_LEN ||
 		    rxq->max_pkt_len > ICE_FRAME_SIZE_MAX) {
 			PMD_DRV_LOG(ERR, "maximum packet length must "
@@ -1103,7 +1103,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev,
 
 	rxq->reg_idx = vsi->base_queue + queue_idx;
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2780,7 +2780,7 @@ ice_tx_free_bufs(struct ice_tx_queue *txq)
 	for (i = 0; i < txq->tx_rs_thresh; i++)
 		rte_prefetch0((txep + i)->mbuf);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) {
 		for (i = 0; i < txq->tx_rs_thresh; ++i, ++txep) {
 			rte_mempool_put(txep->mbuf->pool, txep->mbuf);
 			txep->mbuf = NULL;
@@ -3254,7 +3254,7 @@ ice_set_tx_function_flag(struct rte_eth_dev *dev, struct ice_tx_queue *txq)
 	/* Use a simple Tx queue if possible (only fast free is allowed) */
 	ad->tx_simple_allowed =
 		(txq->offloads ==
-		(txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		(txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
 		txq->tx_rs_thresh >= ICE_TX_MAX_BURST);
 
 	if (ad->tx_simple_allowed)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx2.c b/drivers/net/ice/ice_rxtx_vec_avx2.c
index 9725ac018043..8c870354619e 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx2.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx2.c
@@ -473,7 +473,7 @@ _ice_recv_raw_pkts_vec_avx2(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 5bba9887d296..6d2038975830 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -584,7 +584,7 @@ _ice_recv_raw_pkts_vec_avx512(struct ice_rx_queue *rxq,
 			 * will cause performance drop to get into this context.
 			 */
 			if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-					DEV_RX_OFFLOAD_RSS_HASH) {
+					RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 				/* load bottom half of every 32B desc */
 				const __m128i raw_desc_bh7 =
 					_mm_load_si128
@@ -994,7 +994,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
 	txep = (void *)txq->sw_ring;
 	txep += txq->tx_next_dd - (n - 1);
 
-	if (txq->offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE && (n & 31) == 0) {
 		struct rte_mempool *mp = txep[0].mbuf->pool;
 		void **cache_objs;
 		struct rte_mempool_cache *cache = rte_mempool_default_cache(mp,
diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h
index 2d8ef7dc8a93..a5b573c22da2 100644
--- a/drivers/net/ice/ice_rxtx_vec_common.h
+++ b/drivers/net/ice/ice_rxtx_vec_common.h
@@ -248,23 +248,23 @@ ice_rxq_vec_setup_default(struct ice_rx_queue *rxq)
 }
 
 #define ICE_TX_NO_VECTOR_FLAGS (			\
-		DEV_TX_OFFLOAD_MULTI_SEGS |		\
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
-		DEV_TX_OFFLOAD_TCP_TSO)
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |		\
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |	\
+		RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 #define ICE_TX_VECTOR_OFFLOAD (				\
-		DEV_TX_OFFLOAD_VLAN_INSERT |		\
-		DEV_TX_OFFLOAD_QINQ_INSERT |		\
-		DEV_TX_OFFLOAD_IPV4_CKSUM |		\
-		DEV_TX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_TX_OFFLOAD_UDP_CKSUM |		\
-		DEV_TX_OFFLOAD_TCP_CKSUM)
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |		\
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |		\
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 
 #define ICE_RX_VECTOR_OFFLOAD (				\
-		DEV_RX_OFFLOAD_CHECKSUM |		\
-		DEV_RX_OFFLOAD_SCTP_CKSUM |		\
-		DEV_RX_OFFLOAD_VLAN |			\
-		DEV_RX_OFFLOAD_RSS_HASH)
+		RTE_ETH_RX_OFFLOAD_CHECKSUM |		\
+		RTE_ETH_RX_OFFLOAD_SCTP_CKSUM |		\
+		RTE_ETH_RX_OFFLOAD_VLAN |			\
+		RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define ICE_VECTOR_PATH		0
 #define ICE_VECTOR_OFFLOAD_PATH	1
diff --git a/drivers/net/ice/ice_rxtx_vec_sse.c b/drivers/net/ice/ice_rxtx_vec_sse.c
index 653bd28b417c..117494131f32 100644
--- a/drivers/net/ice/ice_rxtx_vec_sse.c
+++ b/drivers/net/ice/ice_rxtx_vec_sse.c
@@ -479,7 +479,7 @@ _ice_recv_raw_pkts_vec(struct ice_rx_queue *rxq, struct rte_mbuf **rx_pkts,
 		 * will cause performance drop to get into this context.
 		 */
 		if (rxq->vsi->adapter->pf.dev_data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_RSS_HASH) {
+				RTE_ETH_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =
 				_mm_load_si128
diff --git a/drivers/net/igc/igc_ethdev.c b/drivers/net/igc/igc_ethdev.c
index 224a0954836b..c75b06cae1fe 100644
--- a/drivers/net/igc/igc_ethdev.c
+++ b/drivers/net/igc/igc_ethdev.c
@@ -314,8 +314,8 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rx_mq_mode != ETH_MQ_RX_NONE &&
-		rx_mq_mode != ETH_MQ_RX_RSS) {
+	if (rx_mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rx_mq_mode != RTE_ETH_MQ_RX_RSS) {
 		/* RSS together with VMDq not supported*/
 		PMD_INIT_LOG(ERR, "RX mode %d is not supported.",
 				rx_mq_mode);
@@ -325,7 +325,7 @@ igc_check_mq_mode(struct rte_eth_dev *dev)
 	/* To no break software that set invalid mode, only display
 	 * warning if invalid mode is used.
 	 */
-	if (tx_mq_mode != ETH_MQ_TX_NONE)
+	if (tx_mq_mode != RTE_ETH_MQ_TX_NONE)
 		PMD_INIT_LOG(WARNING,
 			"TX mode %d is not supported. Due to meaningless in this driver, just ignore",
 			tx_mq_mode);
@@ -341,8 +341,8 @@ eth_igc_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	ret  = igc_check_mq_mode(dev);
 	if (ret != 0)
@@ -480,12 +480,12 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		uint16_t duplex, speed;
 		hw->mac.ops.get_link_up_info(hw, &speed, &duplex);
 		link.link_duplex = (duplex == FULL_DUPLEX) ?
-				ETH_LINK_FULL_DUPLEX :
-				ETH_LINK_HALF_DUPLEX;
+				RTE_ETH_LINK_FULL_DUPLEX :
+				RTE_ETH_LINK_HALF_DUPLEX;
 		link.link_speed = speed;
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 		link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 		if (speed == SPEED_2500) {
 			uint32_t tipg = IGC_READ_REG(hw, IGC_TIPG);
@@ -497,9 +497,9 @@ eth_igc_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		}
 	} else {
 		link.link_speed = 0;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_status = ETH_LINK_DOWN;
-		link.link_autoneg = ETH_LINK_FIXED;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_autoneg = RTE_ETH_LINK_FIXED;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -532,7 +532,7 @@ eth_igc_interrupt_action(struct rte_eth_dev *dev)
 				" Port %d: Link Up - speed %u Mbps - %s",
 				dev->data->port_id,
 				(unsigned int)link.link_speed,
-				link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+				link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 				"full-duplex" : "half-duplex");
 		else
 			PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -979,18 +979,18 @@ eth_igc_start(struct rte_eth_dev *dev)
 
 	/* VLAN Offload Settings */
 	eth_igc_vlan_offload_set(dev,
-		ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK);
+		RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK);
 
 	/* Setup link speed and duplex */
 	speeds = &dev->data->dev_conf.link_speeds;
-	if (*speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		hw->phy.autoneg_advertised = IGC_ALL_SPEED_DUPLEX_2500;
 		hw->mac.autoneg = 1;
 	} else {
 		int num_speeds = 0;
 
-		if (*speeds & ETH_LINK_SPEED_FIXED) {
+		if (*speeds & RTE_ETH_LINK_SPEED_FIXED) {
 			PMD_DRV_LOG(ERR,
 				    "Force speed mode currently not supported");
 			igc_dev_clear_queues(dev);
@@ -1000,33 +1000,33 @@ eth_igc_start(struct rte_eth_dev *dev)
 		hw->phy.autoneg_advertised = 0;
 		hw->mac.autoneg = 1;
 
-		if (*speeds & ~(ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G)) {
+		if (*speeds & ~(RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G)) {
 			num_speeds = -1;
 			goto error_invalid_config;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_10M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_10M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_10_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M_HD) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M_HD) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_HALF;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_100M) {
+		if (*speeds & RTE_ETH_LINK_SPEED_100M) {
 			hw->phy.autoneg_advertised |= ADVERTISE_100_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_1G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_1G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_1000_FULL;
 			num_speeds++;
 		}
-		if (*speeds & ETH_LINK_SPEED_2_5G) {
+		if (*speeds & RTE_ETH_LINK_SPEED_2_5G) {
 			hw->phy.autoneg_advertised |= ADVERTISE_2500_FULL;
 			num_speeds++;
 		}
@@ -1490,14 +1490,14 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.rar_entry_count;
 	dev_info->rx_offload_capa = IGC_RX_OFFLOAD_ALL;
 	dev_info->tx_offload_capa = IGC_TX_OFFLOAD_ALL;
-	dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	dev_info->max_rx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_tx_queues = IGC_QUEUE_PAIRS_NUM;
 	dev_info->max_vmdq_pools = 0;
 
 	dev_info->hash_key_size = IGC_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = IGC_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1523,9 +1523,9 @@ eth_igc_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10M_HD | ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M_HD | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_2_5G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M_HD | RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M_HD | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_2_5G;
 
 	dev_info->max_mtu = dev_info->max_rx_pktlen - IGC_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
@@ -1603,11 +1603,11 @@ eth_igc_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* switch to jumbo mode if needed */
 	if (mtu > RTE_ETHER_MTU) {
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl |= IGC_RCTL_LPE;
 	} else {
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		rctl &= ~IGC_RCTL_LPE;
 	}
 	IGC_WRITE_REG(hw, IGC_RCTL, rctl);
@@ -2165,13 +2165,13 @@ eth_igc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		rx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2203,16 +2203,16 @@ eth_igc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		hw->fc.requested_mode = igc_fc_none;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		hw->fc.requested_mode = igc_fc_rx_pause;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		hw->fc.requested_mode = igc_fc_tx_pause;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		hw->fc.requested_mode = igc_fc_full;
 		break;
 	default:
@@ -2258,29 +2258,29 @@ eth_igc_rss_reta_update(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* set redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta, reg;
 		uint16_t idx, shift;
 		uint8_t j, mask;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				IGC_RSS_RDT_REG_SIZE_MASK);
 
 		/* if no need to update the register */
 		if (!mask ||
-		    shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+		    shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
 			continue;
 
 		/* check mask whether need to read the register value first */
@@ -2314,29 +2314,29 @@ eth_igc_rss_reta_query(struct rte_eth_dev *dev,
 	struct igc_hw *hw = IGC_DEV_PRIVATE_HW(dev);
 	uint16_t i;
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR,
 			"The size of RSS redirection table configured(%d) doesn't match the number hardware can supported(%d)",
-			reta_size, ETH_RSS_RETA_SIZE_128);
+			reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
-	RTE_BUILD_BUG_ON(ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
+	RTE_BUILD_BUG_ON(RTE_ETH_RSS_RETA_SIZE_128 % IGC_RSS_RDT_REG_SIZE);
 
 	/* read redirection table */
-	for (i = 0; i < ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
+	for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i += IGC_RSS_RDT_REG_SIZE) {
 		union igc_rss_reta_reg reta;
 		uint16_t idx, shift;
 		uint8_t j, mask;
 
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 				IGC_RSS_RDT_REG_SIZE_MASK);
 
 		/* if no need to read register */
 		if (!mask ||
-		    shift > (RTE_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
+		    shift > (RTE_ETH_RETA_GROUP_SIZE - IGC_RSS_RDT_REG_SIZE))
 			continue;
 
 		/* read register and get the queue index */
@@ -2393,23 +2393,23 @@ eth_igc_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_hf = 0;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_TCP_EX)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IGC_MRQC_RSS_FIELD_IPV6_UDP_EX)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 
 	rss_conf->rss_hf |= rss_hf;
 	return 0;
@@ -2495,7 +2495,7 @@ igc_vlan_hw_extend_disable(struct rte_eth_dev *dev)
 		return 0;
 
 	if ((dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0)
 		goto write_ext_vlan;
 
 	/* Update maximum packet length */
@@ -2528,7 +2528,7 @@ igc_vlan_hw_extend_enable(struct rte_eth_dev *dev)
 		return 0;
 
 	if ((dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_JUMBO_FRAME) == 0)
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) == 0)
 		goto write_ext_vlan;
 
 	/* Update maximum packet length */
@@ -2554,22 +2554,22 @@ eth_igc_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			igc_vlan_hw_strip_enable(dev);
 		else
 			igc_vlan_hw_strip_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			igc_vlan_hw_filter_enable(dev);
 		else
 			igc_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			return igc_vlan_hw_extend_enable(dev);
 		else
 			return igc_vlan_hw_extend_disable(dev);
@@ -2587,7 +2587,7 @@ eth_igc_vlan_tpid_set(struct rte_eth_dev *dev,
 	uint32_t reg_val;
 
 	/* only outer TPID of double VLAN can be configured*/
-	if (vlan_type == ETH_VLAN_TYPE_OUTER) {
+	if (vlan_type == RTE_ETH_VLAN_TYPE_OUTER) {
 		reg_val = IGC_READ_REG(hw, IGC_VET);
 		reg_val = (reg_val & (~IGC_VET_EXT)) |
 			((uint32_t)tpid << IGC_VET_EXT_SHIFT);
diff --git a/drivers/net/igc/igc_ethdev.h b/drivers/net/igc/igc_ethdev.h
index 7b6c209df3b6..066792b8a2d8 100644
--- a/drivers/net/igc/igc_ethdev.h
+++ b/drivers/net/igc/igc_ethdev.h
@@ -59,38 +59,38 @@ extern "C" {
 #define IGC_TX_MAX_MTU_SEG	UINT8_MAX
 
 #define IGC_RX_OFFLOAD_ALL	(    \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_VLAN_FILTER | \
-	DEV_RX_OFFLOAD_VLAN_EXTEND | \
-	DEV_RX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_RX_OFFLOAD_UDP_CKSUM   | \
-	DEV_RX_OFFLOAD_TCP_CKSUM   | \
-	DEV_RX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_RX_OFFLOAD_JUMBO_FRAME | \
-	DEV_RX_OFFLOAD_KEEP_CRC    | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+	RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+	RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+	RTE_ETH_RX_OFFLOAD_KEEP_CRC    | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define IGC_TX_OFFLOAD_ALL	(    \
-	DEV_TX_OFFLOAD_VLAN_INSERT | \
-	DEV_TX_OFFLOAD_IPV4_CKSUM  | \
-	DEV_TX_OFFLOAD_UDP_CKSUM   | \
-	DEV_TX_OFFLOAD_TCP_CKSUM   | \
-	DEV_TX_OFFLOAD_SCTP_CKSUM  | \
-	DEV_TX_OFFLOAD_TCP_TSO     | \
-	DEV_TX_OFFLOAD_UDP_TSO	   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM   | \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  | \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO     | \
+	RTE_ETH_TX_OFFLOAD_UDP_TSO	   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define IGC_RSS_OFFLOAD_ALL	(    \
-	ETH_RSS_IPV4               | \
-	ETH_RSS_NONFRAG_IPV4_TCP   | \
-	ETH_RSS_NONFRAG_IPV4_UDP   | \
-	ETH_RSS_IPV6               | \
-	ETH_RSS_NONFRAG_IPV6_TCP   | \
-	ETH_RSS_NONFRAG_IPV6_UDP   | \
-	ETH_RSS_IPV6_EX            | \
-	ETH_RSS_IPV6_TCP_EX        | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4               | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP   | \
+	RTE_ETH_RSS_IPV6               | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP   | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP   | \
+	RTE_ETH_RSS_IPV6_EX            | \
+	RTE_ETH_RSS_IPV6_TCP_EX        | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IGC_MAX_ETQF_FILTERS		3	/* etqf(3) is used for 1588 */
 #define IGC_ETQF_FILTER_1588		3
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index b5489eedd220..82e7e084b41d 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -127,7 +127,7 @@ struct igc_rx_queue {
 	uint8_t             crc_len;    /**< 0 if CRC stripped, 4 otherwise. */
 	uint8_t             drop_en;	/**< If not 0, set SRRCTL.Drop_En. */
 	uint32_t            flags;      /**< RX flags. */
-	uint64_t	    offloads;   /**< offloads of DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads;   /**< offloads of RTE_ETH_RX_OFFLOAD_* */
 };
 
 /** Offload features */
@@ -209,7 +209,7 @@ struct igc_tx_queue {
 	/**< Start context position for transmit queue. */
 	struct igc_advctx_info ctx_cache[IGC_CTX_NUM];
 	/**< Hardware context history.*/
-	uint64_t	       offloads; /**< offloads of DEV_TX_OFFLOAD_* */
+	uint64_t	       offloads; /**< offloads of RTE_ETH_TX_OFFLOAD_* */
 };
 
 static inline uint64_t
@@ -866,23 +866,23 @@ igc_hw_rss_hash_set(struct igc_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IGC_MRQC_ENABLE_RSS_4Q; /* RSS enabled. */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_TCP_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IGC_MRQC_RSS_FIELD_IPV6_UDP_EX;
 	IGC_WRITE_REG(hw, IGC_MRQC, mrqc);
 }
@@ -1056,10 +1056,10 @@ igc_dev_mq_rx_configure(struct rte_eth_dev *dev)
 	}
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		igc_rss_configure(dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/*
 		 * configure RSS register for following,
 		 * then disable the RSS logic
@@ -1099,7 +1099,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 	IGC_WRITE_REG(hw, IGC_RCTL, rctl & ~IGC_RCTL_EN);
 
 	/* Configure support of jumbo frames, if any. */
-	if (offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		rctl |= IGC_RCTL_LPE;
 
 		/*
@@ -1130,7 +1130,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure
 		 */
-		rxq->crc_len = (offloads & DEV_RX_OFFLOAD_KEEP_CRC) ?
+		rxq->crc_len = (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) ?
 				RTE_ETHER_CRC_LEN : 0;
 
 		bus_addr = rxq->rx_ring_phys_addr;
@@ -1196,7 +1196,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RXDCTL(rxq->reg_idx), rxdctl);
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	if (dev->data->scattered_rx) {
@@ -1240,20 +1240,20 @@ igc_rx_init(struct rte_eth_dev *dev)
 	rxcsum |= IGC_RXCSUM_PCSD;
 
 	/* Enable both L3/L4 rx checksum offload */
-	if (offloads & DEV_RX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
 		rxcsum |= IGC_RXCSUM_IPOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_IPOFL;
 
 	if (offloads &
-		(DEV_RX_OFFLOAD_TCP_CKSUM | DEV_RX_OFFLOAD_UDP_CKSUM)) {
+		(RTE_ETH_RX_OFFLOAD_TCP_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		rxcsum |= IGC_RXCSUM_TUOFL;
-		offloads |= DEV_RX_OFFLOAD_SCTP_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_SCTP_CKSUM;
 	} else {
 		rxcsum &= ~IGC_RXCSUM_TUOFL;
 	}
 
-	if (offloads & DEV_RX_OFFLOAD_SCTP_CKSUM)
+	if (offloads & RTE_ETH_RX_OFFLOAD_SCTP_CKSUM)
 		rxcsum |= IGC_RXCSUM_CRCOFL;
 	else
 		rxcsum &= ~IGC_RXCSUM_CRCOFL;
@@ -1261,7 +1261,7 @@ igc_rx_init(struct rte_eth_dev *dev)
 	IGC_WRITE_REG(hw, IGC_RXCSUM, rxcsum);
 
 	/* Setup the Receive Control Register. */
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rctl &= ~IGC_RCTL_SECRC; /* Do not Strip Ethernet CRC. */
 	else
 		rctl |= IGC_RCTL_SECRC; /* Strip Ethernet CRC. */
@@ -1298,12 +1298,12 @@ igc_rx_init(struct rte_eth_dev *dev)
 		IGC_WRITE_REG(hw, IGC_RDT(rxq->reg_idx), rxq->nb_rx_desc - 1);
 
 		dvmolr = IGC_READ_REG(hw, IGC_DVMOLR(rxq->reg_idx));
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			dvmolr |= IGC_DVMOLR_STRVLAN;
 		else
 			dvmolr &= ~IGC_DVMOLR_STRVLAN;
 
-		if (offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			dvmolr &= ~IGC_DVMOLR_STRCRC;
 		else
 			dvmolr |= IGC_DVMOLR_STRCRC;
@@ -2272,10 +2272,10 @@ eth_igc_vlan_strip_queue_set(struct rte_eth_dev *dev,
 	reg_val = IGC_READ_REG(hw, IGC_DVMOLR(rx_queue_id));
 	if (on) {
 		reg_val |= IGC_DVMOLR_STRVLAN;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		reg_val &= ~(IGC_DVMOLR_STRVLAN | IGC_DVMOLR_HIDVLAN);
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	IGC_WRITE_REG(hw, IGC_DVMOLR(rx_queue_id), reg_val);
diff --git a/drivers/net/ionic/ionic_ethdev.c b/drivers/net/ionic/ionic_ethdev.c
index e6207939665e..5e7c22c339d1 100644
--- a/drivers/net/ionic/ionic_ethdev.c
+++ b/drivers/net/ionic/ionic_ethdev.c
@@ -280,37 +280,37 @@ ionic_dev_link_update(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(link));
 
 	if (adapter->idev.port_info->config.an_enable) {
-		link.link_autoneg = ETH_LINK_AUTONEG;
+		link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	}
 
 	if (!adapter->link_up ||
 	    !(lif->state & IONIC_LIF_F_UP)) {
 		/* Interface is down */
-		link.link_status = ETH_LINK_DOWN;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else {
 		/* Interface is up */
-		link.link_status = ETH_LINK_UP;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_status = RTE_ETH_LINK_UP;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		switch (adapter->link_speed) {
 		case  10000:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case  25000:
-			link.link_speed = ETH_SPEED_NUM_25G;
+			link.link_speed = RTE_ETH_SPEED_NUM_25G;
 			break;
 		case  40000:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case  50000:
-			link.link_speed = ETH_SPEED_NUM_50G;
+			link.link_speed = RTE_ETH_SPEED_NUM_50G;
 			break;
 		case 100000:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
@@ -397,17 +397,17 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->flow_type_rss_offloads = IONIC_ETH_RSS_OFFLOAD_ALL;
 
 	dev_info->speed_capa =
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	/*
 	 * Per-queue capabilities
 	 * RTE does not support disabling a feature on a queue if it is
 	 * enabled globally on the device. Thus the driver does not advertise
-	 * capabilities like DEV_TX_OFFLOAD_IPV4_CKSUM as per-queue even
+	 * capabilities like RTE_ETH_TX_OFFLOAD_IPV4_CKSUM as per-queue even
 	 * though the driver would be otherwise capable of disabling it on
 	 * a per-queue basis.
 	 */
@@ -421,25 +421,25 @@ ionic_dev_info_get(struct rte_eth_dev *eth_dev,
 	 */
 
 	dev_info->rx_offload_capa = dev_info->rx_queue_offload_capa |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_JUMBO_FRAME |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_SCATTER |
-		DEV_RX_OFFLOAD_RSS_HASH |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_SCATTER |
+		RTE_ETH_RX_OFFLOAD_RSS_HASH |
 		0;
 
 	dev_info->tx_offload_capa = dev_info->tx_queue_offload_capa |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
 		0;
 
 	dev_info->rx_desc_lim = rx_desc_lim;
@@ -474,9 +474,9 @@ ionic_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		fc_conf->autoneg = 0;
 
 		if (idev->port_info->config.pause_type)
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf->mode = RTE_FC_NONE;
+			fc_conf->mode = RTE_ETH_FC_NONE;
 	}
 
 	return 0;
@@ -498,14 +498,14 @@ ionic_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		pause_type = IONIC_PORT_PAUSE_TYPE_NONE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_type = IONIC_PORT_PAUSE_TYPE_LINK;
 		break;
-	case RTE_FC_RX_PAUSE:
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		return -ENOTSUP;
 	}
 
@@ -556,12 +556,12 @@ ionic_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = tbl_sz / RTE_RETA_GROUP_SIZE;
+	num = tbl_sz / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if (reta_conf[i].mask & ((uint64_t)1 << j)) {
-				index = (i * RTE_RETA_GROUP_SIZE) + j;
+				index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
 				lif->rss_ind_tbl[index] = reta_conf[i].reta[j];
 			}
 		}
@@ -596,12 +596,12 @@ ionic_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = reta_size / RTE_RETA_GROUP_SIZE;
+	num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
 		memcpy(reta_conf->reta,
-			&lif->rss_ind_tbl[i * RTE_RETA_GROUP_SIZE],
-			RTE_RETA_GROUP_SIZE);
+			&lif->rss_ind_tbl[i * RTE_ETH_RETA_GROUP_SIZE],
+			RTE_ETH_RETA_GROUP_SIZE);
 		reta_conf++;
 	}
 
@@ -629,17 +629,17 @@ ionic_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 			IONIC_RSS_HASH_KEY_SIZE);
 
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (lif->rss_types & IONIC_RSS_TYPE_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -671,17 +671,17 @@ ionic_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (!lif->rss_ind_tbl)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 			rss_types |= IONIC_RSS_TYPE_IPV4;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV4_UDP;
-		if (rss_conf->rss_hf & ETH_RSS_IPV6)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 			rss_types |= IONIC_RSS_TYPE_IPV6;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_TCP;
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 			rss_types |= IONIC_RSS_TYPE_IPV6_UDP;
 
 		ionic_lif_rss_config(lif, rss_types, key, NULL);
@@ -853,15 +853,15 @@ ionic_dev_configure(struct rte_eth_dev *eth_dev)
 static inline uint32_t
 ionic_parse_link_speeds(uint16_t link_speeds)
 {
-	if (link_speeds & ETH_LINK_SPEED_100G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 		return 100000;
-	else if (link_speeds & ETH_LINK_SPEED_50G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 		return 50000;
-	else if (link_speeds & ETH_LINK_SPEED_40G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		return 40000;
-	else if (link_speeds & ETH_LINK_SPEED_25G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		return 25000;
-	else if (link_speeds & ETH_LINK_SPEED_10G)
+	else if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		return 10000;
 	else
 		return 0;
@@ -885,12 +885,12 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	IONIC_PRINT_CALL();
 
 	allowed_speeds =
-		ETH_LINK_SPEED_FIXED |
-		ETH_LINK_SPEED_10G |
-		ETH_LINK_SPEED_25G |
-		ETH_LINK_SPEED_40G |
-		ETH_LINK_SPEED_50G |
-		ETH_LINK_SPEED_100G;
+		RTE_ETH_LINK_SPEED_FIXED |
+		RTE_ETH_LINK_SPEED_10G |
+		RTE_ETH_LINK_SPEED_25G |
+		RTE_ETH_LINK_SPEED_40G |
+		RTE_ETH_LINK_SPEED_50G |
+		RTE_ETH_LINK_SPEED_100G;
 
 	if (dev_conf->link_speeds & ~allowed_speeds) {
 		IONIC_PRINT(ERR, "Invalid link setting");
@@ -907,7 +907,7 @@ ionic_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure link */
-	an_enable = (dev_conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+	an_enable = (dev_conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 	ionic_dev_cmd_port_autoneg(idev, an_enable);
 	err = ionic_dev_cmd_wait_check(idev, IONIC_DEVCMD_TIMEOUT);
diff --git a/drivers/net/ionic/ionic_ethdev.h b/drivers/net/ionic/ionic_ethdev.h
index 6cbcd0f825a3..652f28c97d57 100644
--- a/drivers/net/ionic/ionic_ethdev.h
+++ b/drivers/net/ionic/ionic_ethdev.h
@@ -8,12 +8,12 @@
 #include <rte_ethdev.h>
 
 #define IONIC_ETH_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define IONIC_ETH_DEV_TO_LIF(eth_dev) ((struct ionic_lif *) \
 	(eth_dev)->data->dev_private)
diff --git a/drivers/net/ionic/ionic_lif.c b/drivers/net/ionic/ionic_lif.c
index 431eda777b78..d4eb6c1d78be 100644
--- a/drivers/net/ionic/ionic_lif.c
+++ b/drivers/net/ionic/ionic_lif.c
@@ -1688,12 +1688,12 @@ ionic_lif_configure_vlan_offload(struct ionic_lif *lif, int mask)
 
 	/*
 	 * IONIC_ETH_HW_VLAN_RX_FILTER cannot be turned off, so
-	 * set DEV_RX_OFFLOAD_VLAN_FILTER and ignore ETH_VLAN_FILTER_MASK
+	 * set RTE_ETH_RX_OFFLOAD_VLAN_FILTER and ignore RTE_ETH_VLAN_FILTER_MASK
 	 */
-	rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			lif->features |= IONIC_ETH_HW_VLAN_RX_STRIP;
 		else
 			lif->features &= ~IONIC_ETH_HW_VLAN_RX_STRIP;
@@ -1733,19 +1733,19 @@ ionic_lif_configure(struct ionic_lif *lif)
 	/*
 	 * NB: While it is true that RSS_HASH is always enabled on ionic,
 	 *     setting this flag unconditionally causes problems in DTS.
-	 * rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	 * rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 	 */
 
 	/* RX per-port */
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_UDP_CKSUM ||
-	    rxmode->offloads & DEV_RX_OFFLOAD_TCP_CKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_UDP_CKSUM ||
+	    rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 		lif->features |= IONIC_ETH_HW_RX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_RX_CSUM;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		lif->features |= IONIC_ETH_HW_RX_SG;
 		lif->eth_dev->data->scattered_rx = 1;
 	} else {
@@ -1754,30 +1754,30 @@ ionic_lif_configure(struct ionic_lif *lif)
 	}
 
 	/* Covers VLAN_STRIP */
-	ionic_lif_configure_vlan_offload(lif, ETH_VLAN_STRIP_MASK);
+	ionic_lif_configure_vlan_offload(lif, RTE_ETH_VLAN_STRIP_MASK);
 
 	/* TX per-port */
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		lif->features |= IONIC_ETH_HW_TX_CSUM;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_CSUM;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		lif->features |= IONIC_ETH_HW_VLAN_TX_TAG;
 	else
 		lif->features &= ~IONIC_ETH_HW_VLAN_TX_TAG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		lif->features |= IONIC_ETH_HW_TX_SG;
 	else
 		lif->features &= ~IONIC_ETH_HW_TX_SG;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		lif->features |= IONIC_ETH_HW_TSO;
 		lif->features |= IONIC_ETH_HW_TSO_IPV6;
 		lif->features |= IONIC_ETH_HW_TSO_ECN;
diff --git a/drivers/net/ionic/ionic_rxtx.c b/drivers/net/ionic/ionic_rxtx.c
index b83ea1bcaa6a..0c1f6113d0e9 100644
--- a/drivers/net/ionic/ionic_rxtx.c
+++ b/drivers/net/ionic/ionic_rxtx.c
@@ -204,11 +204,11 @@ ionic_dev_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t tx_queue_id,
 		txq->flags |= IONIC_QCQ_F_DEFERRED;
 
 	/* Convert the offload flags into queue flags */
-	if (offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_L3;
-	if (offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_TCP;
-	if (offloads & DEV_TX_OFFLOAD_UDP_CKSUM)
+	if (offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)
 		txq->flags |= IONIC_QCQ_F_CSUM_UDP;
 
 	eth_dev->data->tx_queues[tx_queue_id] = txq;
@@ -745,11 +745,11 @@ ionic_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
 
 	/*
 	 * Note: the interface does not currently support
-	 * DEV_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
+	 * RTE_ETH_RX_OFFLOAD_KEEP_CRC, please also consider ETHER_CRC_LEN
 	 * when the adapter will be able to keep the CRC and subtract
 	 * it to the length for all received packets:
 	 * if (eth_dev->data->dev_conf.rxmode.offloads &
-	 *     DEV_RX_OFFLOAD_KEEP_CRC)
+	 *     RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 	 *   rxq->crc_len = ETHER_CRC_LEN;
 	 */
 
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 589d9fa5877d..2f6df2c2f6b8 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -50,11 +50,11 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->speed_capa =
 		(hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_10GE_XFI) ?
-		ETH_LINK_SPEED_10G :
+		RTE_ETH_LINK_SPEED_10G :
 		((hw->retimer.mac_type ==
 			IFPGA_RAWDEV_RETIMER_MAC_TYPE_25GE_25GAUI) ?
-		ETH_LINK_SPEED_25G :
-		ETH_LINK_SPEED_AUTONEG);
+		RTE_ETH_LINK_SPEED_25G :
+		RTE_ETH_LINK_SPEED_AUTONEG);
 
 	dev_info->max_rx_queues  = 1;
 	dev_info->max_tx_queues  = 1;
@@ -67,31 +67,31 @@ ipn3ke_rpst_dev_infos_get(struct rte_eth_dev *ethdev,
 	};
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->rx_offload_capa =
-		DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_QINQ_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM |
-		DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_VLAN_EXTEND |
-		DEV_RX_OFFLOAD_VLAN_FILTER |
-		DEV_RX_OFFLOAD_JUMBO_FRAME;
-
-	dev_info->tx_queue_offload_capa = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_VLAN_EXTEND |
+		RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+
+	dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	dev_info->tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_QINQ_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM |
-		DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO |
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		DEV_TX_OFFLOAD_GRE_TNL_TSO |
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO |
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		DEV_TX_OFFLOAD_MULTI_SEGS |
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_QINQ_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO |
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 		dev_info->tx_queue_offload_capa;
 
 	dev_info->dev_capa =
@@ -2410,10 +2410,10 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
 				(uint64_t *)&link_speed);
 	switch (link_speed) {
 	case IFPGA_RAWDEV_LINK_SPEED_10GB:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case IFPGA_RAWDEV_LINK_SPEED_25GB:
-		link->link_speed = ETH_SPEED_NUM_25G;
+		link->link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
 		IPN3KE_AFU_PMD_ERR("Unknown link speed info %u", link_speed);
@@ -2471,9 +2471,9 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2529,9 +2529,9 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link.link_autoneg = !(rpst->ethdev->data->dev_conf.link_speeds &
-				ETH_LINK_SPEED_FIXED);
+				RTE_ETH_LINK_SPEED_FIXED);
 
 	rawdev = hw->rawdev;
 	ipn3ke_update_link(rawdev, rpst->port_id, &link);
@@ -2803,10 +2803,10 @@ ipn3ke_rpst_mtu_set(struct rte_eth_dev *ethdev, uint16_t mtu)
 
 	if (frame_size > IPN3KE_ETH_MAX_LEN)
 		dev_data->dev_conf.rxmode.offloads |=
-			(uint64_t)(DEV_RX_OFFLOAD_JUMBO_FRAME);
+			(uint64_t)(RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
 	else
 		dev_data->dev_conf.rxmode.offloads &=
-			(uint64_t)(~DEV_RX_OFFLOAD_JUMBO_FRAME);
+			(uint64_t)(~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME);
 
 	dev_data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
 
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index b5371568b54d..e425cea05aa8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1865,7 +1865,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	qinq &= IXGBE_DMATXCTL_GDV;
 
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (qinq) {
 			reg = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			reg = (reg & (~IXGBE_VLNCTRL_VET)) | (uint32_t)tpid;
@@ -1880,7 +1880,7 @@ ixgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				    " by single VLAN");
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (qinq) {
 			/* Only the high 16-bits is valid */
 			IXGBE_WRITE_REG(hw, IXGBE_EXVET, (uint32_t)tpid <<
@@ -1967,10 +1967,10 @@ ixgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -2091,7 +2091,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	if (hw->mac.type == ixgbe_mac_82598EB) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 			ctrl = IXGBE_READ_REG(hw, IXGBE_VLNCTRL);
 			ctrl |= IXGBE_VLNCTRL_VME;
 			IXGBE_WRITE_REG(hw, IXGBE_VLNCTRL, ctrl);
@@ -2108,7 +2108,7 @@ ixgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
 			ctrl = IXGBE_READ_REG(hw, IXGBE_RXDCTL(rxq->reg_idx));
-			if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+			if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 				ctrl |= IXGBE_RXDCTL_VME;
 				on = TRUE;
 			} else {
@@ -2130,17 +2130,17 @@ ixgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct ixgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -2151,19 +2151,18 @@ ixgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
 		ixgbe_vlan_hw_strip_config(dev);
-	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ixgbe_vlan_hw_filter_enable(dev);
 		else
 			ixgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			ixgbe_vlan_hw_extend_enable(dev);
 		else
 			ixgbe_vlan_hw_extend_disable(dev);
@@ -2202,10 +2201,10 @@ ixgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -2229,18 +2228,18 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (ixgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -2250,12 +2249,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_ONLY;
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -2264,12 +2263,12 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_ONLY;
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -2284,13 +2283,13 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdb+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2299,15 +2298,15 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != IXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -2316,39 +2315,39 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
@@ -2357,7 +2356,7 @@ ixgbe_check_mq_mode(struct rte_eth_dev *dev)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB) {
 			if (nb_tx_q > IXGBE_NONE_MODE_TX_NB_QUEUES) {
 				PMD_INIT_LOG(ERR,
@@ -2381,8 +2380,8 @@ ixgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = ixgbe_check_mq_mode(dev);
@@ -2627,15 +2626,15 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		ixgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -2712,17 +2711,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_2_5G |  ETH_LINK_SPEED_5G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_2_5G |  RTE_ETH_LINK_SPEED_5G |
+			RTE_ETH_LINK_SPEED_10G;
 		if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 				hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-			allowed_speeds = ETH_LINK_SPEED_10M |
-				ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+			allowed_speeds = RTE_ETH_LINK_SPEED_10M |
+				RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 		break;
 	default:
-		allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+		allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 	}
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
@@ -2736,7 +2735,7 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		switch (hw->mac.type) {
 		case ixgbe_mac_82598EB:
 			speed = IXGBE_LINK_SPEED_82598_AUTONEG;
@@ -2754,17 +2753,17 @@ ixgbe_dev_start(struct rte_eth_dev *dev)
 			speed = IXGBE_LINK_SPEED_82599_AUTONEG;
 		}
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= IXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= IXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= IXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= IXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= IXGBE_LINK_SPEED_100_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= IXGBE_LINK_SPEED_10_FULL;
 	}
 
@@ -3839,7 +3838,7 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		 * When DCB/VT is off, maximum number of queues changes,
 		 * except for 82598EB, which remains constant.
 		 */
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_NONE &&
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_NONE &&
 				hw->mac.type != ixgbe_mac_82598EB)
 			dev_info->max_tx_queues = IXGBE_NONE_MODE_TX_NB_QUEUES;
 	}
@@ -3849,9 +3848,9 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->max_mtu =  dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
 	dev_info->min_mtu = RTE_ETHER_MIN_MTU;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
@@ -3890,21 +3889,21 @@ ixgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->reta_size = ixgbe_reta_size_get(hw->mac.type);
 	dev_info->flow_type_rss_offloads = IXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T ||
 			hw->device_id == IXGBE_DEV_ID_X550EM_A_1G_T_L)
-		dev_info->speed_capa = ETH_LINK_SPEED_10M |
-			ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G;
+		dev_info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G;
 
 	if (hw->mac.type == ixgbe_mac_X540 ||
 	    hw->mac.type == ixgbe_mac_X540_vf ||
 	    hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550_vf) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 	}
 	if (hw->mac.type == ixgbe_mac_X550) {
-		dev_info->speed_capa |= ETH_LINK_SPEED_2_5G;
-		dev_info->speed_capa |= ETH_LINK_SPEED_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_2_5G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_5G;
 	}
 
 	/* Driver-preferred Rx/Tx parameters */
@@ -3973,9 +3972,9 @@ ixgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		dev_info->max_vmdq_pools = ETH_16_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_16_POOLS;
 	else
-		dev_info->max_vmdq_pools = ETH_64_POOLS;
+		dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = ixgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (ixgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
@@ -4218,11 +4217,11 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	u32 esdp_reg;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 
 	hw->mac.get_link_status = true;
 
@@ -4244,8 +4243,8 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		diag = ixgbe_check_link(hw, &link_speed, &link_up, wait);
 
 	if (diag != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -4281,37 +4280,37 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case IXGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 		break;
 
 	case IXGBE_LINK_SPEED_10_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 
 	case IXGBE_LINK_SPEED_100_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case IXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case IXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case IXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
 
@@ -4528,7 +4527,7 @@ ixgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -4747,13 +4746,13 @@ ixgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -5051,8 +5050,8 @@ ixgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IXGBE_4_BIT_MASK);
 		if (!mask)
@@ -5099,8 +5098,8 @@ ixgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < reta_size; i += IXGBE_4_BIT_WIDTH) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) &
 						IXGBE_4_BIT_MASK);
 		if (!mask)
@@ -5199,11 +5198,11 @@ ixgbe_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* switch to jumbo mode if needed */
 	if (frame_size > IXGBE_ETH_MAX_LEN) {
 		dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		hlreg0 |= IXGBE_HLREG0_JUMBOEN;
 	} else {
 		dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 		hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
 	}
 	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
@@ -5271,22 +5270,22 @@ ixgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_IXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -5346,8 +5345,8 @@ ixgbevf_dev_start(struct rte_eth_dev *dev)
 	ixgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = ixgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -5581,10 +5580,10 @@ ixgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only supports hw strip feature, others are not supported */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			ixgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
@@ -5715,12 +5714,12 @@ ixgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			IXGBE_WRITE_REG(hw, IXGBE_UTA(i), 0);
 		}
@@ -5734,15 +5733,15 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= IXGBE_VMOLR_AUPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= IXGBE_VMOLR_ROMPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= IXGBE_VMOLR_ROPE;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= IXGBE_VMOLR_BAM;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= IXGBE_VMOLR_MPE;
 
 	return new_val;
@@ -5753,8 +5752,8 @@ ixgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 #define IXGBE_MRCTL_DPME  0x04 /* Downlink Port Mirroring. */
 #define IXGBE_MRCTL_VLME  0x08 /* VLAN Mirroring. */
 #define IXGBE_INVALID_MIRROR_TYPE(mirror_type) \
-	((mirror_type) & ~(uint8_t)(ETH_MIRROR_VIRTUAL_POOL_UP | \
-	ETH_MIRROR_UPLINK_PORT | ETH_MIRROR_DOWNLINK_PORT | ETH_MIRROR_VLAN))
+	((mirror_type) & ~(uint8_t)(RTE_ETH_MIRROR_VIRTUAL_POOL_UP | \
+	RTE_ETH_MIRROR_UPLINK_PORT | RTE_ETH_MIRROR_DOWNLINK_PORT | RTE_ETH_MIRROR_VLAN))
 
 static int
 ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
@@ -5794,7 +5793,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) {
 		mirror_type |= IXGBE_MRCTL_VLME;
 		/* Check if vlan id is valid and find corresponding VLAN ID
 		 * index in VLVF
@@ -5827,7 +5826,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 
 			mr_info->mr_conf[rule_id].vlan.vlan_mask =
 						mirror_conf->vlan.vlan_mask;
-			for (i = 0; i < ETH_VMDQ_MAX_VLAN_FILTERS; i++) {
+			for (i = 0; i < RTE_ETH_VMDQ_MAX_VLAN_FILTERS; i++) {
 				if (mirror_conf->vlan.vlan_mask & (1ULL << i))
 					mr_info->mr_conf[rule_id].vlan.vlan_id[i] =
 						mirror_conf->vlan.vlan_id[i];
@@ -5836,7 +5835,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 			mv_lsb = 0;
 			mv_msb = 0;
 			mr_info->mr_conf[rule_id].vlan.vlan_mask = 0;
-			for (i = 0; i < ETH_VMDQ_MAX_VLAN_FILTERS; i++)
+			for (i = 0; i < RTE_ETH_VMDQ_MAX_VLAN_FILTERS; i++)
 				mr_info->mr_conf[rule_id].vlan.vlan_id[i] = 0;
 		}
 	}
@@ -5845,7 +5844,7 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 	 * if pool mirror is enabled, write the related pool mask register; if disabled,
 	 * clear the PFMRVM register
 	 */
-	if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_VIRTUAL_POOL_UP) {
 		mirror_type |= IXGBE_MRCTL_VPME;
 		if (on) {
 			mp_lsb = mirror_conf->pool_mask & 0xFFFFFFFF;
@@ -5859,9 +5858,9 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 			mr_info->mr_conf[rule_id].pool_mask = 0;
 		}
 	}
-	if (mirror_conf->rule_type & ETH_MIRROR_UPLINK_PORT)
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_UPLINK_PORT)
 		mirror_type |= IXGBE_MRCTL_UPME;
-	if (mirror_conf->rule_type & ETH_MIRROR_DOWNLINK_PORT)
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_DOWNLINK_PORT)
 		mirror_type |= IXGBE_MRCTL_DPME;
 
 	/* read  mirror control register and recalculate it */
@@ -5882,13 +5881,13 @@ ixgbe_mirror_rule_set(struct rte_eth_dev *dev,
 	IXGBE_WRITE_REG(hw, IXGBE_MRCTL(rule_id), mr_ctl);
 
 	/* write pool mirror control register */
-	if (mirror_conf->rule_type & ETH_MIRROR_VIRTUAL_POOL_UP) {
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_VIRTUAL_POOL_UP) {
 		IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id), mp_lsb);
 		IXGBE_WRITE_REG(hw, IXGBE_VMRVM(rule_id + rule_mr_offset),
 				mp_msb);
 	}
 	/* write VLAN mirror control register */
-	if (mirror_conf->rule_type & ETH_MIRROR_VLAN) {
+	if (mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) {
 		IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id), mv_lsb);
 		IXGBE_WRITE_REG(hw, IXGBE_VMRVLAN(rule_id + rule_mr_offset),
 				mv_msb);
@@ -6266,8 +6265,8 @@ ixgbe_set_queue_rate_limit(struct rte_eth_dev *dev,
 	 * register. MMW_SIZE=0x014 if 9728-byte jumbo is supported, otherwise
 	 * set as 0x4.
 	 */
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) &&
-	    (rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE))
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) &&
+	    rxmode->max_rx_pkt_len >= IXGBE_MAX_JUMBO_FRAME_SIZE)
 		IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRM,
 			IXGBE_MMW_SIZE_JUMBO_FRAME);
 	else
@@ -6942,15 +6941,15 @@ ixgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = IXGBE_INCVAL_100;
 		shift = IXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = IXGBE_INCVAL_1GB;
 		shift = IXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = IXGBE_INCVAL_10GB;
 		shift = IXGBE_INCVAL_SHIFT_10GB;
@@ -7361,16 +7360,16 @@ ixgbe_reta_size_get(enum ixgbe_mac_type mac_type) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		return ETH_RSS_RETA_SIZE_512;
+		return RTE_ETH_RSS_RETA_SIZE_512;
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
-		return ETH_RSS_RETA_SIZE_64;
+		return RTE_ETH_RSS_RETA_SIZE_64;
 	case ixgbe_mac_X540_vf:
 	case ixgbe_mac_82599_vf:
 		return 0;
 	default:
-		return ETH_RSS_RETA_SIZE_128;
+		return RTE_ETH_RSS_RETA_SIZE_128;
 	}
 }
 
@@ -7380,10 +7379,10 @@ ixgbe_reta_reg_get(enum ixgbe_mac_type mac_type, uint16_t reta_idx) {
 	case ixgbe_mac_X550:
 	case ixgbe_mac_X550EM_x:
 	case ixgbe_mac_X550EM_a:
-		if (reta_idx < ETH_RSS_RETA_SIZE_128)
+		if (reta_idx < RTE_ETH_RSS_RETA_SIZE_128)
 			return IXGBE_RETA(reta_idx >> 2);
 		else
-			return IXGBE_ERETA((reta_idx - ETH_RSS_RETA_SIZE_128) >> 2);
+			return IXGBE_ERETA((reta_idx - RTE_ETH_RSS_RETA_SIZE_128) >> 2);
 	case ixgbe_mac_X550_vf:
 	case ixgbe_mac_X550EM_x_vf:
 	case ixgbe_mac_X550EM_a_vf:
@@ -7439,7 +7438,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -7450,7 +7449,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled*/
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -7474,9 +7473,9 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled*/
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7489,7 +7488,7 @@ ixgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -7742,7 +7741,7 @@ ixgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -7774,7 +7773,7 @@ ixgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = ixgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -7871,12 +7870,12 @@ ixgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_add_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
 
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
@@ -7908,11 +7907,11 @@ ixgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		ret = ixgbe_del_vxlan_port(hw, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		PMD_DRV_LOG(ERR, "Tunnel type is not supported now.");
 		ret = -EINVAL;
 		break;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index a0ce18ca246b..3443154589e8 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -113,15 +113,15 @@
 #define IXGBE_FDIR_NVGRE_TUNNEL_TYPE    0x0
 
 #define IXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define IXGBE_VF_IRQ_ENABLE_MASK        3          /* vf irq enable mask */
 #define IXGBE_VF_MAXMSIVECTOR           1
diff --git a/drivers/net/ixgbe/ixgbe_fdir.c b/drivers/net/ixgbe/ixgbe_fdir.c
index 27a49bbce5e7..7894047829a8 100644
--- a/drivers/net/ixgbe/ixgbe_fdir.c
+++ b/drivers/net/ixgbe/ixgbe_fdir.c
@@ -90,9 +90,9 @@ static int fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl);
 static uint32_t ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 				 uint32_t key);
 static uint32_t atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static uint32_t atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc);
+		enum rte_eth_fdir_pballoc_type pballoc);
 static int fdir_write_perfect_filter_82599(struct ixgbe_hw *hw,
 			union ixgbe_atr_input *input, uint8_t queue,
 			uint32_t fdircmd, uint32_t fdirhash,
@@ -163,20 +163,20 @@ fdir_enable_82599(struct ixgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf, uint32_t *fdirctrl)
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf, uint32_t *fdirctrl)
 {
 	*fdirctrl = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= IXGBE_FDIRCTRL_PBALLOC_256K;
 		break;
@@ -807,13 +807,13 @@ ixgbe_atr_compute_hash_82599(union ixgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		return ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				PERFECT_BUCKET_128KB_HASH_MASK;
@@ -850,15 +850,15 @@ ixgbe_fdir_check_cmd_complete(struct ixgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_sig_hash_82599(union ixgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash = ixgbe_atr_compute_hash_82599(input,
 				IXGBE_ATR_BUCKET_HASH_KEY) &
 				SIG_BUCKET_128KB_HASH_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_flow.c b/drivers/net/ixgbe/ixgbe_flow.c
index 511b612f7fe4..0557de6c1aa5 100644
--- a/drivers/net/ixgbe/ixgbe_flow.c
+++ b/drivers/net/ixgbe/ixgbe_flow.c
@@ -1259,7 +1259,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index e45c5501e6bf..944c9f23809e 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -392,7 +392,7 @@ ixgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -400,7 +400,7 @@ ixgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -633,11 +633,11 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -657,7 +657,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= IXGBE_HLREG0_TXCRCEN | IXGBE_HLREG0_RXCRCSTRP;
 	IXGBE_WRITE_REG(hw, IXGBE_HLREG0, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECRXCTRL, 0);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECRXCTRL);
 		if (reg != 0) {
@@ -665,7 +665,7 @@ ixgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		IXGBE_WRITE_REG(hw, IXGBE_SECTXCTRL,
 				IXGBE_SECTXCTRL_STORE_FORWARD);
 		reg = IXGBE_READ_REG(hw, IXGBE_SECTXCTRL);
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index fbf2b17d160f..d03238b728ba 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -107,15 +107,15 @@ int ixgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct ixgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -266,15 +266,15 @@ int ixgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gpie |= IXGBE_GPIE_MSIX_MODE | IXGBE_GPIE_PBA_SUPPORT;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_64;
 		gpie |= IXGBE_GPIE_VTMODE_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_32;
 		gpie |= IXGBE_GPIE_VTMODE_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= IXGBE_GCR_EXT_VT_MODE_16;
 		gpie |= IXGBE_GPIE_VTMODE_16;
 		break;
@@ -604,11 +604,11 @@ ixgbe_set_vf_lpe(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 		hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
 		if (max_frame > IXGBE_ETH_MAX_LEN) {
 			dev->data->dev_conf.rxmode.offloads |=
-				DEV_RX_OFFLOAD_JUMBO_FRAME;
+				RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			hlreg0 |= IXGBE_HLREG0_JUMBOEN;
 		} else {
 			dev->data->dev_conf.rxmode.offloads &=
-				~DEV_RX_OFFLOAD_JUMBO_FRAME;
+				~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			hlreg0 &= ~IXGBE_HLREG0_JUMBOEN;
 		}
 		IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
@@ -684,29 +684,29 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 		vmvir = IXGBE_READ_REG(hw, IXGBE_VMVIR(vf));
 		vlana = vmvir & IXGBE_VMVIR_VLANA_MASK;
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index c814a28cb49a..4e712c2b5e61 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2591,26 +2591,26 @@ ixgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2778,7 +2778,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/*
@@ -3014,7 +3014,7 @@ ixgbe_get_rx_queue_offloads(struct rte_eth_dev *dev)
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (hw->mac.type != ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return offloads;
 }
@@ -3025,20 +3025,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t offloads;
 	struct ixgbe_hw *hw = IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_JUMBO_FRAME |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_SCATTER |
-		   DEV_RX_OFFLOAD_RSS_HASH;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_SCATTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hw->mac.type == ixgbe_mac_82598EB)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (ixgbe_is_vf(dev) == 0)
-		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 
 	/*
 	 * RSC is only supported by 82599 and x540 PF devices in a non-SR-IOV
@@ -3048,20 +3048,20 @@ ixgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	     hw->mac.type == ixgbe_mac_X540 ||
 	     hw->mac.type == ixgbe_mac_X550) &&
 	    !RTE_ETH_DEV_SRIOV(dev).active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
 	if (hw->mac.type == ixgbe_mac_X550 ||
 	    hw->mac.type == ixgbe_mac_X550EM_x ||
 	    hw->mac.type == ixgbe_mac_X550EM_a)
-		offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -3116,7 +3116,7 @@ ixgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -3520,23 +3520,23 @@ ixgbe_hw_rss_hash_set(struct ixgbe_hw *hw, struct rte_eth_rss_conf *rss_conf)
 	/* Set configured hashing protocols in MRQC register */
 	rss_hf = rss_conf->rss_hf;
 	mrqc = IXGBE_MRQC_RSSEN; /* Enable RSS */
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_TCP;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6;
-	if (rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_TCP;
-	if (rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV4_UDP;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_UDP;
-	if (rss_hf & ETH_RSS_IPV6_UDP_EX)
+	if (rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 		mrqc |= IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP;
 	IXGBE_WRITE_REG(hw, mrqc_reg, mrqc);
 }
@@ -3618,23 +3618,23 @@ ixgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	}
 	rss_hf = 0;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_TCP)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 	if (mrqc & IXGBE_MRQC_RSS_FIELD_IPV6_EX_UDP)
-		rss_hf |= ETH_RSS_IPV6_UDP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_UDP_EX;
 	rss_conf->rss_hf = rss_hf;
 	return 0;
 }
@@ -3710,12 +3710,12 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		ixgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * RXPBSIZE
@@ -3740,7 +3740,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = IXGBE_READ_REG(hw, IXGBE_RXPBSIZE(i));
 
 		rxpbsize &= (~(0x3FF << IXGBE_RXPBSIZE_SHIFT));
@@ -3749,7 +3749,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	}
 
 	/* MRQC: enable vmdq and dcb */
-	mrqc = (num_pools == ETH_16_POOLS) ?
+	mrqc = (num_pools == RTE_ETH_16_POOLS) ?
 		IXGBE_MRQC_VMDQRT8TCEN : IXGBE_MRQC_VMDQRT4TCEN;
 	IXGBE_WRITE_REG(hw, IXGBE_MRQC, mrqc);
 
@@ -3765,7 +3765,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* RTRUP2TC: mapping user priorities to traffic classes (TCs) */
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3789,7 +3789,7 @@ ixgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 16 or 32 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*
 	 * MPSAR - allow pools to read specific mac addresses
@@ -3871,7 +3871,7 @@ ixgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	if (hw->mac.type != ixgbe_mac_82598EB)
 		/*PF VF Transmit Enable*/
 		IXGBE_WRITE_REG(hw, IXGBE_VFTE(0),
-			vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	ixgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3887,12 +3887,12 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3902,7 +3902,7 @@ ixgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3920,12 +3920,12 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct ixgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3935,7 +3935,7 @@ ixgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3962,7 +3962,7 @@ ixgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3989,7 +3989,7 @@ ixgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[IXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -4158,7 +4158,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		if (hw->mac.type != ixgbe_mac_82598EB) {
 			config_dcb_rx = DCB_RX_CONFIG;
@@ -4171,8 +4171,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			ixgbe_vmdq_dcb_configure(dev);
 		}
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -4185,7 +4185,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -4196,7 +4196,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		ixgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/*get DCB TX configuration parameters from rte_eth_conf*/
@@ -4212,15 +4212,15 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	ixgbe_dcb_unpack_map_cee(dcb_config, IXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < IXGBE_DCB_MAX_TRAFFIC_CLASS); i++) {
-			if ((mask & 0x1) && (j < ETH_DCB_NUM_USER_PRIORITIES))
+			if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -4270,9 +4270,8 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), rxpbsize);
 		}
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			IXGBE_WRITE_REG(hw, IXGBE_RXPBSIZE(i), 0);
-		}
 	}
 	if (config_dcb_tx) {
 		/* Only support an equally distributed
@@ -4286,7 +4285,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size*/
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), 0);
 			IXGBE_WRITE_REG(hw, IXGBE_TXPBTHRESH(i), 0);
 		}
@@ -4322,7 +4321,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	ixgbe_dcb_config_tc_stats_82599(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/*
@@ -4336,7 +4335,7 @@ ixgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = ixgbe_dcb_pfc_enabled;
 		}
 		ixgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = ixgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -4357,12 +4356,12 @@ void ixgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if ((dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB) &&
-	    (dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS))
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -4418,7 +4417,7 @@ ixgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* VFRE: pool enabling for receive - 64 */
 	IXGBE_WRITE_REG(hw, IXGBE_VFRE(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		IXGBE_WRITE_REG(hw, IXGBE_VFRE(1), UINT32_MAX);
 
 	/*
@@ -4539,11 +4538,11 @@ ixgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = IXGBE_READ_REG(hw, IXGBE_MRQC);
 	mrqc &= ~IXGBE_MRQC_MRQE_MASK;
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS64EN;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= IXGBE_MRQC_VMDQRSS32EN;
 		break;
 
@@ -4564,17 +4563,17 @@ ixgbe_config_vf_default(struct rte_eth_dev *dev)
 		IXGBE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQEN);
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT4TCEN);
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		IXGBE_WRITE_REG(hw, IXGBE_MRQC,
 			IXGBE_MRQC_VMDQRT8TCEN);
 		break;
@@ -4601,21 +4600,21 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			ixgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode.*/
 			ixgbe_rss_disable(dev);
@@ -4626,18 +4625,18 @@ ixgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			ixgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			ixgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4671,7 +4670,7 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			ixgbe_vmdq_tx_hw_configure(hw);
 		else {
 			mtqc = IXGBE_MTQC_64Q_1PB;
@@ -4684,13 +4683,13 @@ ixgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_64VF;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_32VF;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = IXGBE_MTQC_VT_ENA | IXGBE_MTQC_RT_ENA |
 				IXGBE_MTQC_8TC_8TQ;
 			break;
@@ -4898,7 +4897,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
 		rxq->rx_using_sse = rx_using_sse;
 #ifdef RTE_LIB_SECURITY
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 #endif
 	}
 }
@@ -4926,10 +4925,10 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4937,8 +4936,8 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration (chapter 4.6.7.2.1 of 82599 Spec) */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		/*
 		 * According to chapter 4.6.7.2.1 of the Spec Rev.
 		 * 3.0 RSC configuration requires HW CRC stripping being
@@ -4952,7 +4951,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RFCTL configuration  */
 	rfctl = IXGBE_READ_REG(hw, IXGBE_RFCTL);
-	if ((rsc_capable) && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if ((rsc_capable) && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~IXGBE_RFCTL_RSC_DIS;
 	else
 		rfctl |= IXGBE_RFCTL_RSC_DIS;
@@ -4961,7 +4960,7 @@ ixgbe_set_rsc(struct rte_eth_dev *dev)
 	IXGBE_WRITE_REG(hw, IXGBE_RFCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set RDRXCTL.RSCACKC bit */
@@ -5082,7 +5081,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~IXGBE_HLREG0_RXCRCSTRP;
 	else
 		hlreg0 |= IXGBE_HLREG0_RXCRCSTRP;
@@ -5090,7 +5089,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	/*
 	 * Configure jumbo frame support, if any.
 	 */
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		hlreg0 |= IXGBE_HLREG0_JUMBOEN;
 		maxfrs = IXGBE_READ_REG(hw, IXGBE_MAXFRS);
 		maxfrs &= 0x0000FFFF;
@@ -5119,7 +5118,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first.
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5128,7 +5127,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -5171,11 +5170,11 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 		if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
 					    2 * IXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -5190,7 +5189,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = IXGBE_READ_REG(hw, IXGBE_RXCSUM);
 	rxcsum |= IXGBE_RXCSUM_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= IXGBE_RXCSUM_IPPCSE;
 	else
 		rxcsum &= ~IXGBE_RXCSUM_IPPCSE;
@@ -5200,7 +5199,7 @@ ixgbe_dev_rx_init(struct rte_eth_dev *dev)
 	if (hw->mac.type == ixgbe_mac_82599EB ||
 	    hw->mac.type == ixgbe_mac_X540) {
 		rdrxctl = IXGBE_READ_REG(hw, IXGBE_RDRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~IXGBE_RDRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= IXGBE_RDRXCTL_CRCSTRIP;
@@ -5406,9 +5405,9 @@ ixgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 
 #ifdef RTE_LIB_SECURITY
 	if ((dev->data->dev_conf.rxmode.offloads &
-			DEV_RX_OFFLOAD_SECURITY) ||
+			RTE_ETH_RX_OFFLOAD_SECURITY) ||
 		(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY)) {
+			RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = ixgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -5696,7 +5695,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
@@ -5745,7 +5744,7 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		buf_size = (uint16_t) ((srrctl & IXGBE_SRRCTL_BSIZEPKT_MASK) <<
 				       IXGBE_SRRCTL_BSIZEPKT_SHIFT);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (rxmode->max_rx_pkt_len +
 				2 * IXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -5754,8 +5753,8 @@ ixgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/* Set RQPL for VF RSS according to max Rx queue */
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 476ef62cfda2..220efffe4d08 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -133,7 +133,7 @@ struct ixgbe_rx_queue {
 	uint8_t             rx_udp_csum_zero_err;
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -226,7 +226,7 @@ struct ixgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t offloads; /**< Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t offloads; /**< Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct ixgbe_advctx_info ctx_cache[IXGBE_CTX_NUM];
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
index adba855ca30f..714707941537 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h
@@ -278,7 +278,7 @@ static inline int
 ixgbe_rx_vec_dev_conf_condition_check_default(struct rte_eth_dev *dev)
 {
 #ifndef RTE_LIBRTE_IEEE1588
-	struct rte_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fconf = &dev->data->dev_conf.fdir_conf;
 
 	/* no fdir support */
 	if (fconf->mode != RTE_FDIR_MODE_NONE)
diff --git a/drivers/net/ixgbe/ixgbe_tm.c b/drivers/net/ixgbe/ixgbe_tm.c
index a8407e742e6d..c2ab3131f22e 100644
--- a/drivers/net/ixgbe/ixgbe_tm.c
+++ b/drivers/net/ixgbe/ixgbe_tm.c
@@ -119,14 +119,14 @@ ixgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -375,10 +375,10 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -392,7 +392,7 @@ ixgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index d5b636a19408..536e33010703 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -58,20 +58,20 @@ ixgbe_vf_representor_dev_infos_get(struct rte_eth_dev *ethdev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	/**< Maximum number of MAC addresses. */
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-		DEV_RX_OFFLOAD_IPV4_CKSUM |	DEV_RX_OFFLOAD_UDP_CKSUM  |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	RTE_ETH_RX_OFFLOAD_UDP_CKSUM  |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 	/**< Device RX offload capabilities. */
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM | DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM | DEV_TX_OFFLOAD_SCTP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_TSO | DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM | RTE_ETH_TX_OFFLOAD_SCTP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO | RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	/**< Device TX offload capabilities. */
 
 	dev_info->speed_capa =
 		representor->pf_ethdev->data->dev_link.link_speed;
-	/**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+	/**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 
 	dev_info->switch_info.name =
 		representor->pf_ethdev->device->name;
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index cf089cd9aee5..9729f8575f53 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -303,10 +303,10 @@ rte_pmd_ixgbe_set_vf_vlan_stripq(uint16_t port, uint16_t vf, uint8_t on)
 	 */
 	if (hw->mac.type == ixgbe_mac_82598EB)
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_16_POOLS;
+				  RTE_ETH_16_POOLS;
 	else
 		queues_per_pool = (uint16_t)hw->mac.max_rx_queues /
-				  ETH_64_POOLS;
+				  RTE_ETH_64_POOLS;
 
 	for (q = 0; q < queues_per_pool; q++)
 		(*dev->dev_ops->vlan_strip_queue_set)(dev,
@@ -736,14 +736,14 @@ rte_pmd_ixgbe_set_tc_bw_alloc(uint16_t port,
 	bw_conf = IXGBE_DEV_PRIVATE_TO_BW_CONF(dev->data->dev_private);
 	eth_conf = &dev->data->dev_conf;
 
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.h b/drivers/net/ixgbe/rte_pmd_ixgbe.h
index 90fc8160b1f8..eef6f6661c74 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.h
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.h
@@ -285,8 +285,8 @@ int rte_pmd_ixgbe_macsec_select_rxsa(uint16_t port, uint8_t idx, uint8_t an,
 * @param rx_mask
 *    The RX mode mask, which is one or more of accepting Untagged Packets,
 *    packets that match the PFUTA table, Broadcast and Multicast Promiscuous.
-*    ETH_VMDQ_ACCEPT_UNTAG,ETH_VMDQ_ACCEPT_HASH_UC,
-*    ETH_VMDQ_ACCEPT_BROADCAST and ETH_VMDQ_ACCEPT_MULTICAST will be used
+*    RTE_ETH_VMDQ_ACCEPT_UNTAG, RTE_ETH_VMDQ_ACCEPT_HASH_UC,
+*    RTE_ETH_VMDQ_ACCEPT_BROADCAST and RTE_ETH_VMDQ_ACCEPT_MULTICAST will be used
 *    in rx_mode.
 * @param on
 *    1 - Enable a VF RX mode.
diff --git a/drivers/net/kni/rte_eth_kni.c b/drivers/net/kni/rte_eth_kni.c
index 871d11c4133d..29060ca76f93 100644
--- a/drivers/net/kni/rte_eth_kni.c
+++ b/drivers/net/kni/rte_eth_kni.c
@@ -61,10 +61,10 @@ struct pmd_internals {
 };
 
 static const struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 static int is_kni_initialized;
 
diff --git a/drivers/net/liquidio/lio_ethdev.c b/drivers/net/liquidio/lio_ethdev.c
index b72060a4499b..e91f4c13c63b 100644
--- a/drivers/net/liquidio/lio_ethdev.c
+++ b/drivers/net/liquidio/lio_ethdev.c
@@ -384,15 +384,15 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPN3:
 	case PCI_SUBSYS_DEV_ID_CN2350_210SVPT:
 	case PCI_SUBSYS_DEV_ID_CN2360_210SVPT:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		break;
 	/* CN23xx 25G cards */
 	case PCI_SUBSYS_DEV_ID_CN2350_225:
 	case PCI_SUBSYS_DEV_ID_CN2360_225:
-		devinfo->speed_capa = ETH_LINK_SPEED_25G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_25G;
 		break;
 	default:
-		devinfo->speed_capa = ETH_LINK_SPEED_10G;
+		devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 		lio_dev_err(lio_dev,
 			    "Unknown CN23XX subsystem device id. Setting 10G as default link speed.\n");
 		return -EINVAL;
@@ -406,27 +406,27 @@ lio_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	devinfo->max_mac_addrs = 1;
 
-	devinfo->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_RX_OFFLOAD_UDP_CKSUM		|
-				    DEV_RX_OFFLOAD_TCP_CKSUM		|
-				    DEV_RX_OFFLOAD_VLAN_STRIP		|
-				    DEV_RX_OFFLOAD_RSS_HASH);
-	devinfo->tx_offload_capa = (DEV_TX_OFFLOAD_IPV4_CKSUM		|
-				    DEV_TX_OFFLOAD_UDP_CKSUM		|
-				    DEV_TX_OFFLOAD_TCP_CKSUM		|
-				    DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM);
+	devinfo->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_RX_OFFLOAD_VLAN_STRIP		|
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH);
+	devinfo->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM		|
+				    RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM);
 
 	devinfo->rx_desc_lim = lio_rx_desc_lim;
 	devinfo->tx_desc_lim = lio_tx_desc_lim;
 
 	devinfo->reta_size = LIO_RSS_MAX_TABLE_SZ;
 	devinfo->hash_key_size = LIO_RSS_MAX_KEY_SZ;
-	devinfo->flow_type_rss_offloads = (ETH_RSS_IPV4			|
-					   ETH_RSS_NONFRAG_IPV4_TCP	|
-					   ETH_RSS_IPV6			|
-					   ETH_RSS_NONFRAG_IPV6_TCP	|
-					   ETH_RSS_IPV6_EX		|
-					   ETH_RSS_IPV6_TCP_EX);
+	devinfo->flow_type_rss_offloads = (RTE_ETH_RSS_IPV4			|
+					   RTE_ETH_RSS_NONFRAG_IPV4_TCP	|
+					   RTE_ETH_RSS_IPV6			|
+					   RTE_ETH_RSS_NONFRAG_IPV6_TCP	|
+					   RTE_ETH_RSS_IPV6_EX		|
+					   RTE_ETH_RSS_IPV6_TCP_EX);
 	return 0;
 }
 
@@ -483,10 +483,10 @@ lio_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 
 	if (frame_len > LIO_ETH_MAX_LEN)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_JUMBO_FRAME;
+			RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
 		eth_dev->data->dev_conf.rxmode.offloads &=
-			~DEV_RX_OFFLOAD_JUMBO_FRAME;
+			~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	eth_dev->data->dev_conf.rxmode.max_rx_pkt_len = frame_len;
 	eth_dev->data->mtu = mtu;
@@ -540,10 +540,10 @@ lio_dev_rss_reta_update(struct rte_eth_dev *eth_dev,
 	rss_param->param.flags &= ~LIO_RSS_PARAM_ITABLE_UNCHANGED;
 	rss_param->param.itablesize = LIO_RSS_MAX_TABLE_SZ;
 
-	for (i = 0; i < (reta_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask) & ((uint64_t)1 << j)) {
-				index = (i * RTE_RETA_GROUP_SIZE) + j;
+				index = (i * RTE_ETH_RETA_GROUP_SIZE) + j;
 				rss_state->itable[index] = reta_conf[i].reta[j];
 			}
 		}
@@ -583,12 +583,12 @@ lio_dev_rss_reta_query(struct rte_eth_dev *eth_dev,
 		return -EINVAL;
 	}
 
-	num = reta_size / RTE_RETA_GROUP_SIZE;
+	num = reta_size / RTE_ETH_RETA_GROUP_SIZE;
 
 	for (i = 0; i < num; i++) {
 		memcpy(reta_conf->reta,
-		       &rss_state->itable[i * RTE_RETA_GROUP_SIZE],
-		       RTE_RETA_GROUP_SIZE);
+		       &rss_state->itable[i * RTE_ETH_RETA_GROUP_SIZE],
+		       RTE_ETH_RETA_GROUP_SIZE);
 		reta_conf++;
 	}
 
@@ -616,17 +616,17 @@ lio_dev_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
 		memcpy(hash_key, rss_state->hash_key, rss_state->hash_key_size);
 
 	if (rss_state->ip)
-		rss_hf |= ETH_RSS_IPV4;
+		rss_hf |= RTE_ETH_RSS_IPV4;
 	if (rss_state->tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	if (rss_state->ipv6)
-		rss_hf |= ETH_RSS_IPV6;
+		rss_hf |= RTE_ETH_RSS_IPV6;
 	if (rss_state->ipv6_tcp_hash)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (rss_state->ipv6_ex)
-		rss_hf |= ETH_RSS_IPV6_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_EX;
 	if (rss_state->ipv6_tcp_ex_hash)
-		rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	rss_conf->rss_hf = rss_hf;
 
@@ -694,42 +694,42 @@ lio_dev_rss_hash_update(struct rte_eth_dev *eth_dev,
 		if (rss_state->hash_disable)
 			return -EINVAL;
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 			hashinfo |= LIO_RSS_HASH_IPV4;
 			rss_state->ip = 1;
 		} else {
 			rss_state->ip = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV4;
 			rss_state->tcp_hash = 1;
 		} else {
 			rss_state->tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6) {
 			hashinfo |= LIO_RSS_HASH_IPV6;
 			rss_state->ipv6 = 1;
 		} else {
 			rss_state->ipv6 = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6;
 			rss_state->ipv6_tcp_hash = 1;
 		} else {
 			rss_state->ipv6_tcp_hash = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX) {
 			hashinfo |= LIO_RSS_HASH_IPV6_EX;
 			rss_state->ipv6_ex = 1;
 		} else {
 			rss_state->ipv6_ex = 0;
 		}
 
-		if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX) {
+		if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX) {
 			hashinfo |= LIO_RSS_HASH_TCP_IPV6_EX;
 			rss_state->ipv6_tcp_ex_hash = 1;
 		} else {
@@ -778,7 +778,7 @@ lio_dev_udp_tunnel_add(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -835,7 +835,7 @@ lio_dev_udp_tunnel_del(struct rte_eth_dev *eth_dev,
 	if (udp_tnl == NULL)
 		return -EINVAL;
 
-	if (udp_tnl->prot_type != RTE_TUNNEL_TYPE_VXLAN) {
+	if (udp_tnl->prot_type != RTE_ETH_TUNNEL_TYPE_VXLAN) {
 		lio_dev_err(lio_dev, "Unsupported tunnel type\n");
 		return -1;
 	}
@@ -933,10 +933,10 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 
 	/* Initialize */
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	/* Return what we found */
 	if (lio_dev->linfo.link.s.link_up == 0) {
@@ -944,18 +944,18 @@ lio_dev_link_update(struct rte_eth_dev *eth_dev,
 		return rte_eth_linkstatus_set(eth_dev, &link);
 	}
 
-	link.link_status = ETH_LINK_UP; /* Interface is up */
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP; /* Interface is up */
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	switch (lio_dev->linfo.link.s.speed) {
 	case LIO_LINK_SPEED_10000:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case LIO_LINK_SPEED_25000:
-		link.link_speed = ETH_SPEED_NUM_25G;
+		link.link_speed = RTE_ETH_SPEED_NUM_25G;
 		break;
 	default:
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_HALF_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	}
 
 	return rte_eth_linkstatus_set(eth_dev, &link);
@@ -1107,8 +1107,8 @@ lio_dev_rss_configure(struct rte_eth_dev *eth_dev)
 
 		q_idx = (uint8_t)((eth_dev->data->nb_rx_queues > 1) ?
 				  i % eth_dev->data->nb_rx_queues : 0);
-		conf_idx = i / RTE_RETA_GROUP_SIZE;
-		reta_idx = i % RTE_RETA_GROUP_SIZE;
+		conf_idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_idx = i % RTE_ETH_RETA_GROUP_SIZE;
 		reta_conf[conf_idx].reta[reta_idx] = q_idx;
 		reta_conf[conf_idx].mask |= ((uint64_t)1 << reta_idx);
 	}
@@ -1124,10 +1124,10 @@ lio_dev_mq_rx_configure(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rss_conf rss_conf;
 
 	switch (eth_dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		lio_dev_rss_configure(eth_dev);
 		break;
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 	/* if mq_mode is none, disable rss mode. */
 	default:
 		memset(&rss_conf, 0, sizeof(rss_conf));
@@ -1509,7 +1509,7 @@ lio_dev_set_link_up(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 1;
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -1530,11 +1530,11 @@ lio_dev_set_link_down(struct rte_eth_dev *eth_dev)
 	}
 
 	lio_dev->linfo.link.s.link_up = 0;
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (lio_send_rx_ctrl_cmd(eth_dev, 0)) {
 		lio_dev->linfo.link.s.link_up = 1;
-		eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+		eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		lio_dev_err(lio_dev, "Unable to set Link Down\n");
 		return -1;
 	}
@@ -1746,9 +1746,9 @@ lio_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
 		eth_dev->data->dev_conf.rxmode.offloads |=
-			DEV_RX_OFFLOAD_RSS_HASH;
+			RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Inform firmware about change in number of queues to use.
 	 * Disable IO queues and reset registers for re-configuration.
diff --git a/drivers/net/memif/memif_socket.c b/drivers/net/memif/memif_socket.c
index f58ff4c0cb77..a117a05228fc 100644
--- a/drivers/net/memif/memif_socket.c
+++ b/drivers/net/memif/memif_socket.c
@@ -525,7 +525,7 @@ memif_disconnect(struct rte_eth_dev *dev)
 	int i;
 	int ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 	pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTED;
 
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index de6becd45e3e..ea66f5bfd452 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -55,10 +55,10 @@ static const char * const valid_arguments[] = {
 };
 
 static const struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_AUTONEG
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_AUTONEG
 };
 
 #define MEMIF_MP_SEND_REGION		"memif_mp_send_region"
@@ -1216,7 +1216,7 @@ memif_connect(struct rte_eth_dev *dev)
 
 		pmd->flags &= ~ETH_MEMIF_FLAG_CONNECTING;
 		pmd->flags |= ETH_MEMIF_FLAG_CONNECTED;
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	}
 	MIF_LOG(INFO, "Connected.");
 	return 0;
@@ -1367,10 +1367,10 @@ memif_link_update(struct rte_eth_dev *dev,
 
 	if (rte_eal_process_type() == RTE_PROC_SECONDARY) {
 		proc_private = dev->process_private;
-		if (dev->data->dev_link.link_status == ETH_LINK_UP &&
+		if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP &&
 				proc_private->regions_num == 0) {
 			memif_mp_request_regions(dev);
-		} else if (dev->data->dev_link.link_status == ETH_LINK_DOWN &&
+		} else if (dev->data->dev_link.link_status == RTE_ETH_LINK_DOWN &&
 				proc_private->regions_num > 0) {
 			memif_free_regions(dev);
 		}
diff --git a/drivers/net/mlx4/mlx4_ethdev.c b/drivers/net/mlx4/mlx4_ethdev.c
index 783ff94dce8d..d606ec8ca76d 100644
--- a/drivers/net/mlx4/mlx4_ethdev.c
+++ b/drivers/net/mlx4/mlx4_ethdev.c
@@ -657,11 +657,11 @@ mlx4_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->if_index = priv->if_index;
 	info->hash_key_size = MLX4_RSS_HASH_KEY_SIZE;
 	info->speed_capa =
-			ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_20G |
-			ETH_LINK_SPEED_40G |
-			ETH_LINK_SPEED_56G;
+			RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_20G |
+			RTE_ETH_LINK_SPEED_40G |
+			RTE_ETH_LINK_SPEED_56G;
 	info->flow_type_rss_offloads = mlx4_conv_rss_types(priv, 0, 1);
 
 	return 0;
@@ -821,13 +821,13 @@ mlx4_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		dev_link.link_speed = link_speed;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	dev->data->dev_link = dev_link;
 	return 0;
 }
@@ -863,13 +863,13 @@ mlx4_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	ret = 0;
 out:
 	MLX4_ASSERT(ret >= 0);
@@ -899,13 +899,13 @@ mlx4_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx4/mlx4_flow.c b/drivers/net/mlx4/mlx4_flow.c
index 71ea91b3fb82..2e1b6c87e983 100644
--- a/drivers/net/mlx4/mlx4_flow.c
+++ b/drivers/net/mlx4/mlx4_flow.c
@@ -109,21 +109,21 @@ mlx4_conv_rss_types(struct mlx4_priv *priv, uint64_t types, int verbs_to_dpdk)
 	};
 	static const uint64_t dpdk[] = {
 		[INNER] = 0,
-		[IPV4] = ETH_RSS_IPV4,
-		[IPV4_1] = ETH_RSS_FRAG_IPV4,
-		[IPV4_2] = ETH_RSS_NONFRAG_IPV4_OTHER,
-		[IPV6] = ETH_RSS_IPV6,
-		[IPV6_1] = ETH_RSS_FRAG_IPV6,
-		[IPV6_2] = ETH_RSS_NONFRAG_IPV6_OTHER,
-		[IPV6_3] = ETH_RSS_IPV6_EX,
+		[IPV4] = RTE_ETH_RSS_IPV4,
+		[IPV4_1] = RTE_ETH_RSS_FRAG_IPV4,
+		[IPV4_2] = RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
+		[IPV6] = RTE_ETH_RSS_IPV6,
+		[IPV6_1] = RTE_ETH_RSS_FRAG_IPV6,
+		[IPV6_2] = RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
+		[IPV6_3] = RTE_ETH_RSS_IPV6_EX,
 		[TCP] = 0,
 		[UDP] = 0,
-		[IPV4_TCP] = ETH_RSS_NONFRAG_IPV4_TCP,
-		[IPV4_UDP] = ETH_RSS_NONFRAG_IPV4_UDP,
-		[IPV6_TCP] = ETH_RSS_NONFRAG_IPV6_TCP,
-		[IPV6_TCP_1] = ETH_RSS_IPV6_TCP_EX,
-		[IPV6_UDP] = ETH_RSS_NONFRAG_IPV6_UDP,
-		[IPV6_UDP_1] = ETH_RSS_IPV6_UDP_EX,
+		[IPV4_TCP] = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
+		[IPV4_UDP] = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
+		[IPV6_TCP] = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
+		[IPV6_TCP_1] = RTE_ETH_RSS_IPV6_TCP_EX,
+		[IPV6_UDP] = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+		[IPV6_UDP_1] = RTE_ETH_RSS_IPV6_UDP_EX,
 	};
 	static const uint64_t verbs[RTE_DIM(dpdk)] = {
 		[INNER] = IBV_RX_HASH_INNER,
@@ -1283,7 +1283,7 @@ mlx4_flow_internal_next_vlan(struct mlx4_priv *priv, uint16_t vlan)
  * - MAC flow rules are generated from @p dev->data->mac_addrs
  *   (@p priv->mac array).
  * - An additional flow rule for Ethernet broadcasts is also generated.
- * - All these are per-VLAN if @p DEV_RX_OFFLOAD_VLAN_FILTER
+ * - All these are per-VLAN if @p RTE_ETH_RX_OFFLOAD_VLAN_FILTER
  *   is enabled and VLAN filters are configured.
  *
  * @param priv
@@ -1358,7 +1358,7 @@ mlx4_flow_internal(struct mlx4_priv *priv, struct rte_flow_error *error)
 	struct rte_ether_addr *rule_mac = &eth_spec.dst;
 	rte_be16_t *rule_vlan =
 		(ETH_DEV(priv)->data->dev_conf.rxmode.offloads &
-		 DEV_RX_OFFLOAD_VLAN_FILTER) &&
+		 RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 		!ETH_DEV(priv)->data->promiscuous ?
 		&vlan_spec.tci :
 		NULL;
diff --git a/drivers/net/mlx4/mlx4_intr.c b/drivers/net/mlx4/mlx4_intr.c
index d56009c41845..2aab0f60a7b5 100644
--- a/drivers/net/mlx4/mlx4_intr.c
+++ b/drivers/net/mlx4/mlx4_intr.c
@@ -118,7 +118,7 @@ mlx4_rx_intr_vec_enable(struct mlx4_priv *priv)
 static void
 mlx4_link_status_alarm(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	MLX4_ASSERT(priv->intr_alarm == 1);
@@ -183,7 +183,7 @@ mlx4_interrupt_handler(struct mlx4_priv *priv)
 	};
 	uint32_t caught[RTE_DIM(type)] = { 0 };
 	struct ibv_async_event event;
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	unsigned int i;
 
@@ -280,7 +280,7 @@ mlx4_intr_uninstall(struct mlx4_priv *priv)
 int
 mlx4_intr_install(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 	int rc;
 
@@ -386,7 +386,7 @@ mlx4_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
 int
 mlx4_rxq_intr_enable(struct mlx4_priv *priv)
 {
-	const struct rte_intr_conf *const intr_conf =
+	const struct rte_eth_intr_conf *const intr_conf =
 		&ETH_DEV(priv)->data->dev_conf.intr_conf;
 
 	if (intr_conf->rxq && mlx4_rx_intr_vec_enable(priv) < 0)
diff --git a/drivers/net/mlx4/mlx4_rxq.c b/drivers/net/mlx4/mlx4_rxq.c
index 978cbb8201ea..9977c761880a 100644
--- a/drivers/net/mlx4/mlx4_rxq.c
+++ b/drivers/net/mlx4/mlx4_rxq.c
@@ -682,13 +682,13 @@ mlx4_rxq_detach(struct rxq *rxq)
 uint64_t
 mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_SCATTER |
-			    DEV_RX_OFFLOAD_KEEP_CRC |
-			    DEV_RX_OFFLOAD_JUMBO_FRAME |
-			    DEV_RX_OFFLOAD_RSS_HASH;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_SCATTER |
+			    RTE_ETH_RX_OFFLOAD_KEEP_CRC |
+			    RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (priv->hw_csum)
-		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	return offloads;
 }
 
@@ -704,7 +704,7 @@ mlx4_get_rx_queue_offloads(struct mlx4_priv *priv)
 uint64_t
 mlx4_get_rx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	(void)priv;
 	return offloads;
@@ -785,7 +785,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	/* By default, FCS (CRC) is stripped by hardware. */
 	crc_present = 0;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (priv->hw_fcs_strip) {
 			crc_present = 1;
 		} else {
@@ -816,9 +816,9 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts = elts,
 		/* Toggle Rx checksum offload if hardware supports it. */
 		.csum = priv->hw_csum &&
-			(offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.csum_l2tun = priv->hw_csum_l2tun &&
-			      (offloads & DEV_RX_OFFLOAD_CHECKSUM),
+			      (offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM),
 		.crc_present = crc_present,
 		.l2tun_offload = priv->hw_csum_l2tun,
 		.stats = {
@@ -831,7 +831,7 @@ mlx4_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	if (dev->data->dev_conf.rxmode.max_rx_pkt_len <=
 	    (mb_len - RTE_PKTMBUF_HEADROOM)) {
 		;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		uint32_t size =
 			RTE_PKTMBUF_HEADROOM +
 			dev->data->dev_conf.rxmode.max_rx_pkt_len;
diff --git a/drivers/net/mlx4/mlx4_txq.c b/drivers/net/mlx4/mlx4_txq.c
index 2df26842fbe4..19feec5e5202 100644
--- a/drivers/net/mlx4/mlx4_txq.c
+++ b/drivers/net/mlx4/mlx4_txq.c
@@ -273,20 +273,20 @@ mlx4_txq_fill_dv_obj_info(struct txq *txq, struct mlx4dv_obj *mlxdv)
 uint64_t
 mlx4_get_tx_port_offloads(struct mlx4_priv *priv)
 {
-	uint64_t offloads = DEV_TX_OFFLOAD_MULTI_SEGS;
+	uint64_t offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (priv->hw_csum) {
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	}
 	if (priv->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (priv->hw_csum_l2tun) {
-		offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (priv->tso)
-			offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	}
 	return offloads;
 }
@@ -394,12 +394,12 @@ mlx4_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		.elts_comp_cd_init =
 			RTE_MIN(MLX4_PMD_TX_PER_COMP_REQ, desc / 4),
 		.csum = priv->hw_csum &&
-			(offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-					   DEV_TX_OFFLOAD_UDP_CKSUM |
-					   DEV_TX_OFFLOAD_TCP_CKSUM)),
+			(offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_TX_OFFLOAD_TCP_CKSUM)),
 		.csum_l2tun = priv->hw_csum_l2tun &&
 			      (offloads &
-			       DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM),
+			       RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM),
 		/* Enable Tx loopback for VF devices. */
 		.lb = !!priv->vf,
 		.bounce_buf = bounce_buf,
diff --git a/drivers/net/mlx5/linux/mlx5_ethdev_os.c b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
index f34133e2c641..79e27fe2d668 100644
--- a/drivers/net/mlx5/linux/mlx5_ethdev_os.c
+++ b/drivers/net/mlx5/linux/mlx5_ethdev_os.c
@@ -439,24 +439,24 @@ mlx5_link_update_unlocked_gset(struct rte_eth_dev *dev,
 	}
 	link_speed = ethtool_cmd_speed(&edata);
 	if (link_speed == -1)
-		dev_link.link_speed = ETH_SPEED_NUM_UNKNOWN;
+		dev_link.link_speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	else
 		dev_link.link_speed = link_speed;
 	priv->link_speed_capa = 0;
 	if (edata.supported & (SUPPORTED_1000baseT_Full |
 			       SUPPORTED_1000baseKX_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (edata.supported & SUPPORTED_10000baseKR_Full)
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (edata.supported & (SUPPORTED_40000baseKR4_Full |
 			       SUPPORTED_40000baseCR4_Full |
 			       SUPPORTED_40000baseSR4_Full |
 			       SUPPORTED_40000baseLR4_Full))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	dev_link.link_duplex = ((edata.duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			ETH_LINK_SPEED_FIXED);
+			RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -545,45 +545,45 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		return ret;
 	}
 	dev_link.link_speed = (ecmd->speed == UINT32_MAX) ?
-				ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
+				RTE_ETH_SPEED_NUM_UNKNOWN : ecmd->speed;
 	sc = ecmd->link_mode_masks[0] |
 		((uint64_t)ecmd->link_mode_masks[1] << 32);
 	priv->link_speed_capa = 0;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseT_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_1000baseKX_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_1G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKX4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_10000baseR_FEC_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_10G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseMLD2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_20000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_20G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_20G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_40000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_40G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_56000baseLR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_56G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_56G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseCR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseKR_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_25000baseSR_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_25G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_50000baseKR2_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_50G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseSR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseCR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_100000baseLR4_ER4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_100G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_100G;
 	if (sc & (MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseKR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseSR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 
 	sc = ecmd->link_mode_masks[2] |
 		((uint64_t)ecmd->link_mode_masks[3] << 32);
@@ -591,11 +591,11 @@ mlx5_link_update_unlocked_gs(struct rte_eth_dev *dev,
 		  MLX5_BITSHIFT
 		       (ETHTOOL_LINK_MODE_200000baseLR4_ER4_FR4_Full_BIT) |
 		  MLX5_BITSHIFT(ETHTOOL_LINK_MODE_200000baseDR4_Full_BIT)))
-		priv->link_speed_capa |= ETH_LINK_SPEED_200G;
+		priv->link_speed_capa |= RTE_ETH_LINK_SPEED_200G;
 	dev_link.link_duplex = ((ecmd->duplex == DUPLEX_HALF) ?
-				ETH_LINK_HALF_DUPLEX : ETH_LINK_FULL_DUPLEX);
+				RTE_ETH_LINK_HALF_DUPLEX : RTE_ETH_LINK_FULL_DUPLEX);
 	dev_link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-				  ETH_LINK_SPEED_FIXED);
+				  RTE_ETH_LINK_SPEED_FIXED);
 	*link = dev_link;
 	return 0;
 }
@@ -677,13 +677,13 @@ mlx5_dev_get_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 	fc_conf->autoneg = ethpause.autoneg;
 	if (ethpause.rx_pause && ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (ethpause.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (ethpause.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -709,14 +709,14 @@ mlx5_dev_set_flow_ctrl(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	ifr.ifr_data = (void *)&ethpause;
 	ethpause.autoneg = fc_conf->autoneg;
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_RX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_RX_PAUSE))
 		ethpause.rx_pause = 1;
 	else
 		ethpause.rx_pause = 0;
 
-	if (((fc_conf->mode & RTE_FC_FULL) == RTE_FC_FULL) ||
-	    (fc_conf->mode & RTE_FC_TX_PAUSE))
+	if (((fc_conf->mode & RTE_ETH_FC_FULL) == RTE_ETH_FC_FULL) ||
+	    (fc_conf->mode & RTE_ETH_FC_TX_PAUSE))
 		ethpause.tx_pause = 1;
 	else
 		ethpause.tx_pause = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 5f8766aa481e..c40cda8fcaf9 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1343,8 +1343,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	config->hw_vlan_strip = !!(sh->device_attr.raw_packet_caps &
@@ -1627,7 +1627,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	/*
 	 * If HW has bug working with tunnel packet decapsulation and
 	 * scatter FCS, and decapsulation is needed, clear the hw_fcs_strip
-	 * bit. Then DEV_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
+	 * bit. Then RTE_ETH_RX_OFFLOAD_KEEP_CRC bit will not be set anymore.
 	 */
 	if (config->hca_attr.scatter_fcs_w_decap_disable && config->decap_en)
 		config->hw_fcs_strip = 0;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f84e061fe719..ff1c8e17460a 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1463,10 +1463,10 @@ mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev __rte_unused,
 			 struct rte_eth_udp_tunnel *udp_tunnel)
 {
 	MLX5_ASSERT(udp_tunnel != NULL);
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN &&
 	    udp_tunnel->udp_port == 4789)
 		return 0;
-	if (udp_tunnel->prot_type == RTE_TUNNEL_TYPE_VXLAN_GPE &&
+	if (udp_tunnel->prot_type == RTE_ETH_TUNNEL_TYPE_VXLAN_GPE &&
 	    udp_tunnel->udp_port == 4790)
 		return 0;
 	return -ENOTSUP;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index e02714e23196..9588dff05180 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1226,7 +1226,7 @@ TAILQ_HEAD(mlx5_legacy_flow_meters, mlx5_legacy_flow_meter);
 struct mlx5_flow_rss_desc {
 	uint32_t level;
 	uint32_t queue_num; /**< Number of entries in @p queue. */
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint64_t hash_fields; /* Verbs Hash fields. */
 	uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
 	uint32_t key_len; /**< RSS hash key len. */
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index fe86bb40d351..12ddf4c7ff28 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -90,11 +90,11 @@
 #define MLX5_VPMD_DESCS_PER_LOOP      4
 
 /* Mask of RSS on source only or destination only. */
-#define MLX5_RSS_SRC_DST_ONLY (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY | \
-			       ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define MLX5_RSS_SRC_DST_ONLY (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY | \
+			       RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
 /* Supported RSS */
-#define MLX5_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP | \
+#define MLX5_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP | \
 			    MLX5_RSS_SRC_DST_ONLY))
 
 /* Timeout in seconds to get a valid link status. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 82e2284d9866..f2b78c3cc69e 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -91,7 +91,7 @@ mlx5_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	if ((dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
+			RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP) &&
 			rte_mbuf_dyn_tx_timestamp_register(NULL, NULL) != 0) {
 		DRV_LOG(ERR, "port %u cannot register Tx timestamp field/flag",
 			dev->data->port_id);
@@ -225,8 +225,8 @@ mlx5_set_default_params(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	info->default_txportconf.ring_size = 256;
 	info->default_rxportconf.burst_size = MLX5_RX_DEFAULT_BURST;
 	info->default_txportconf.burst_size = MLX5_TX_DEFAULT_BURST;
-	if ((priv->link_speed_capa & ETH_LINK_SPEED_200G) |
-		(priv->link_speed_capa & ETH_LINK_SPEED_100G)) {
+	if ((priv->link_speed_capa & RTE_ETH_LINK_SPEED_200G) |
+		(priv->link_speed_capa & RTE_ETH_LINK_SPEED_100G)) {
 		info->default_rxportconf.nb_queues = 16;
 		info->default_txportconf.nb_queues = 16;
 		if (dev->data->nb_rx_queues > 2 ||
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 4762fa0f5f88..7048fff3883e 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -98,7 +98,7 @@ struct mlx5_flow_expand_node {
 	uint64_t rss_types;
 	/**<
 	 * RSS types bit-field associated with this node
-	 * (see ETH_RSS_* definitions).
+	 * (see RTE_ETH_RSS_* definitions).
 	 */
 	uint64_t node_flags;
 	/**<
@@ -272,7 +272,7 @@ mlx5_flow_expand_rss_item_complete(const struct rte_flow_item *item)
  * @param[in] pattern
  *   User flow pattern.
  * @param[in] types
- *   RSS types to expand (see ETH_RSS_* definitions).
+ *   RSS types to expand (see RTE_ETH_RSS_* definitions).
  * @param[in] graph
  *   Input graph to expand @p pattern according to @p types.
  * @param[in] graph_root_index
@@ -522,8 +522,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_IPV4,
 			 MLX5_EXPANSION_IPV6),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -531,11 +531,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT
@@ -546,8 +546,8 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 			 MLX5_EXPANSION_GRE,
 			 MLX5_EXPANSION_NVGRE),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_UDP] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_VXLAN,
@@ -555,11 +555,11 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 						  MLX5_EXPANSION_MPLS,
 						  MLX5_EXPANSION_GTP),
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_OUTER_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_VXLAN] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_ETH,
@@ -612,32 +612,32 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV4_UDP,
 						  MLX5_EXPANSION_IPV4_TCP),
 		.type = RTE_FLOW_ITEM_TYPE_IPV4,
-		.rss_types = ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 |
-			ETH_RSS_NONFRAG_IPV4_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 |
+			RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	},
 	[MLX5_EXPANSION_IPV4_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	},
 	[MLX5_EXPANSION_IPV4_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV4_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	},
 	[MLX5_EXPANSION_IPV6] = {
 		.next = MLX5_FLOW_EXPAND_RSS_NEXT(MLX5_EXPANSION_IPV6_UDP,
 						  MLX5_EXPANSION_IPV6_TCP,
 						  MLX5_EXPANSION_IPV6_FRAG_EXT),
 		.type = RTE_FLOW_ITEM_TYPE_IPV6,
-		.rss_types = ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 |
-			ETH_RSS_NONFRAG_IPV6_OTHER,
+		.rss_types = RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 |
+			RTE_ETH_RSS_NONFRAG_IPV6_OTHER,
 	},
 	[MLX5_EXPANSION_IPV6_UDP] = {
 		.type = RTE_FLOW_ITEM_TYPE_UDP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_UDP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 	},
 	[MLX5_EXPANSION_IPV6_TCP] = {
 		.type = RTE_FLOW_ITEM_TYPE_TCP,
-		.rss_types = ETH_RSS_NONFRAG_IPV6_TCP,
+		.rss_types = RTE_ETH_RSS_NONFRAG_IPV6_TCP,
 	},
 	[MLX5_EXPANSION_IPV6_FRAG_EXT] = {
 		.type = RTE_FLOW_ITEM_TYPE_IPV6_FRAG_EXT,
@@ -1048,7 +1048,7 @@ mlx5_flow_item_acceptable(const struct rte_flow_item *item,
  * @param[in] tunnel
  *   1 when the hash field is for a tunnel item.
  * @param[in] layer_types
- *   ETH_RSS_* types.
+ *   RTE_ETH_RSS_* types.
  * @param[in] hash_fields
  *   Item hash fields.
  *
@@ -1601,14 +1601,14 @@ mlx5_validate_action_rss(struct rte_eth_dev *dev,
 					  &rss->types,
 					  "some RSS protocols are not"
 					  " supported");
-	if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
-	    !(rss->types & ETH_RSS_IP))
+	if ((rss->types & (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY)) &&
+	    !(rss->types & RTE_ETH_RSS_IP))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L3 partial RSS requested but L3 RSS"
 					  " type not specified");
-	if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
-	    !(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
+	if ((rss->types & (RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)) &&
+	    !(rss->types & (RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP)))
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
 					  "L4 partial RSS requested but L4 RSS"
@@ -6364,8 +6364,8 @@ flow_list_create(struct rte_eth_dev *dev, enum mlx5_flow_type type,
 		 * mlx5_flow_hashfields_adjust() in advance.
 		 */
 		rss_desc->level = rss->level;
-		/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-		rss_desc->types = !rss->types ? ETH_RSS_IP : rss->types;
+		/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+		rss_desc->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	}
 	flow->dev_handles = 0;
 	if (rss && rss->types) {
@@ -6989,7 +6989,7 @@ mlx5_ctrl_flow_vlan(struct rte_eth_dev *dev,
 	if (!priv->reta_idx_n || !priv->rxqs_n) {
 		return 0;
 	}
-	if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+	if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 		action_rss.types = 0;
 	for (i = 0; i != priv->reta_idx_n; ++i)
 		queue[i] = (*priv->reta_idx)[i];
@@ -8657,7 +8657,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
 				(error, EINVAL,
 				RTE_FLOW_ERROR_TYPE_ACTION_CONF,
 				NULL, "invalid port configuration");
-		if (!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG))
+		if (!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG))
 			ctx->action_rss.types = 0;
 		for (i = 0; i != priv->reta_idx_n; ++i)
 			ctx->queue[i] = (*priv->reta_idx)[i];
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 76ad53f2a1e8..d5d3a89374fe 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -328,18 +328,18 @@ enum mlx5_feature_name {
 
 /* Valid layer type for IPV4 RSS. */
 #define MLX5_IPV4_LAYER_TYPES \
-	(ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | \
-	 ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \
-	 ETH_RSS_NONFRAG_IPV4_OTHER)
+	(RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	 RTE_ETH_RSS_NONFRAG_IPV4_OTHER)
 
 /* IBV hash source bits  for IPV4. */
 #define MLX5_IPV4_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
 
 /* Valid layer type for IPV6 RSS. */
 #define MLX5_IPV6_LAYER_TYPES \
-	(ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP | \
-	 ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_EX  | ETH_RSS_IPV6_TCP_EX | \
-	 ETH_RSS_IPV6_UDP_EX | ETH_RSS_NONFRAG_IPV6_OTHER)
+	(RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	 RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_EX  | RTE_ETH_RSS_IPV6_TCP_EX | \
+	 RTE_ETH_RSS_IPV6_UDP_EX | RTE_ETH_RSS_NONFRAG_IPV6_OTHER)
 
 /* IBV hash source bits  for IPV6. */
 #define MLX5_IPV6_IBV_RX_HASH (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 3f6f5dcfbadb..02a337dc2c93 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10934,9 +10934,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV4)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV4))) {
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV4;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV4;
 			else
 				dev_flow->hash_fields |= MLX5_IPV4_IBV_RX_HASH;
@@ -10944,9 +10944,9 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L3_IPV6)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L3_IPV6))) {
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
-			if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_SRC_IPV6;
-			else if (rss_types & ETH_RSS_L3_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				dev_flow->hash_fields |= IBV_RX_HASH_DST_IPV6;
 			else
 				dev_flow->hash_fields |= MLX5_IPV6_IBV_RX_HASH;
@@ -10960,11 +10960,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		return;
 	if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_UDP)) ||
 	    (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_UDP))) {
-		if (rss_types & ETH_RSS_UDP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_UDP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_UDP;
 			else
@@ -10972,11 +10972,11 @@ flow_dv_hashfields_set(struct mlx5_flow *dev_flow,
 		}
 	} else if ((rss_inner && (items & MLX5_FLOW_LAYER_INNER_L4_TCP)) ||
 		   (!rss_inner && (items & MLX5_FLOW_LAYER_OUTER_L4_TCP))) {
-		if (rss_types & ETH_RSS_TCP) {
-			if (rss_types & ETH_RSS_L4_SRC_ONLY)
+		if (rss_types & RTE_ETH_RSS_TCP) {
+			if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_SRC_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_DST_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				dev_flow->hash_fields |=
 						IBV_RX_HASH_DST_PORT_TCP;
 			else
@@ -14495,9 +14495,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4:
 		if (rss_types & MLX5_IPV4_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV4;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV4;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV4;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV4;
@@ -14506,9 +14506,9 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV6:
 		if (rss_types & MLX5_IPV6_LAYER_TYPES) {
 			*hash_field &= ~MLX5_RSS_HASH_IPV6;
-			if (rss_types & ETH_RSS_L3_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L3_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_IPV6;
-			else if (rss_types & ETH_RSS_L3_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L3_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_IPV6;
 			else
 				*hash_field |= MLX5_RSS_HASH_IPV6;
@@ -14517,11 +14517,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_UDP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_UDP:
-		if (rss_types & ETH_RSS_UDP) {
+		if (rss_types & RTE_ETH_RSS_UDP) {
 			*hash_field &= ~MLX5_UDP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_UDP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_UDP;
 			else
 				*hash_field |= MLX5_UDP_IBV_RX_HASH;
@@ -14530,11 +14530,11 @@ __flow_dv_action_rss_l34_hash_adjust(struct mlx5_shared_action_rss *rss,
 	case MLX5_RSS_HASH_IPV4_TCP:
 		/* fall-through. */
 	case MLX5_RSS_HASH_IPV6_TCP:
-		if (rss_types & ETH_RSS_TCP) {
+		if (rss_types & RTE_ETH_RSS_TCP) {
 			*hash_field &= ~MLX5_TCP_IBV_RX_HASH;
-			if (rss_types & ETH_RSS_L4_DST_ONLY)
+			if (rss_types & RTE_ETH_RSS_L4_DST_ONLY)
 				*hash_field |= IBV_RX_HASH_DST_PORT_TCP;
-			else if (rss_types & ETH_RSS_L4_SRC_ONLY)
+			else if (rss_types & RTE_ETH_RSS_L4_SRC_ONLY)
 				*hash_field |= IBV_RX_HASH_SRC_PORT_TCP;
 			else
 				*hash_field |= MLX5_TCP_IBV_RX_HASH;
@@ -14682,8 +14682,8 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
 	origin = &shared_rss->origin;
 	origin->func = rss->func;
 	origin->level = rss->level;
-	/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
-	origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+	/* RSS type 0 indicates default RSS type (RTE_ETH_RSS_IP). */
+	origin->types = !rss->types ? RTE_ETH_RSS_IP : rss->types;
 	/* NULL RSS key indicates default RSS key. */
 	rss_key = !rss->key ? rss_hash_default_key : rss->key;
 	memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index b93fd4d2c962..ef286a13729c 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1834,7 +1834,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_TCP,
+					(rss_desc, tunnel, RTE_ETH_RSS_TCP,
 					 (IBV_RX_HASH_SRC_PORT_TCP |
 					  IBV_RX_HASH_DST_PORT_TCP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
@@ -1847,7 +1847,7 @@ flow_verbs_translate(struct rte_eth_dev *dev,
 			if (dev_flow->hash_fields != 0)
 				dev_flow->hash_fields |=
 					mlx5_flow_hashfields_adjust
-					(rss_desc, tunnel, ETH_RSS_UDP,
+					(rss_desc, tunnel, RTE_ETH_RSS_UDP,
 					 (IBV_RX_HASH_SRC_PORT_UDP |
 					  IBV_RX_HASH_DST_PORT_UDP));
 			item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
diff --git a/drivers/net/mlx5/mlx5_rss.c b/drivers/net/mlx5/mlx5_rss.c
index c32129cdc2b8..a4f690039e24 100644
--- a/drivers/net/mlx5/mlx5_rss.c
+++ b/drivers/net/mlx5/mlx5_rss.c
@@ -68,7 +68,7 @@ mlx5_rss_hash_update(struct rte_eth_dev *dev,
 		if (!(*priv->rxqs)[i])
 			continue;
 		(*priv->rxqs)[i]->rss_hash = !!rss_conf->rss_hf &&
-			!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS);
+			!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS);
 		++idx;
 	}
 	return 0;
@@ -170,8 +170,8 @@ mlx5_dev_rss_reta_query(struct rte_eth_dev *dev,
 	}
 	/* Fill each entry of the table even if its bit is not set. */
 	for (idx = 0, i = 0; (i != reta_size); ++i) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		reta_conf[idx].reta[i % RTE_RETA_GROUP_SIZE] =
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		reta_conf[idx].reta[i % RTE_ETH_RETA_GROUP_SIZE] =
 			(*priv->reta_idx)[i];
 	}
 	return 0;
@@ -209,8 +209,8 @@ mlx5_dev_rss_reta_update(struct rte_eth_dev *dev,
 	if (ret)
 		return ret;
 	for (idx = 0, i = 0; (i != reta_size); ++i) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		pos = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (((reta_conf[idx].mask >> i) & 0x1) == 0)
 			continue;
 		MLX5_ASSERT(reta_conf[idx].reta[pos] < priv->rxqs_n);
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index abd8ce798986..0d6c58f47d89 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -333,23 +333,23 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_dev_config *config = &priv->config;
-	uint64_t offloads = (DEV_RX_OFFLOAD_SCATTER |
-			     DEV_RX_OFFLOAD_TIMESTAMP |
-			     DEV_RX_OFFLOAD_JUMBO_FRAME |
-			     DEV_RX_OFFLOAD_RSS_HASH);
+	uint64_t offloads = (RTE_ETH_RX_OFFLOAD_SCATTER |
+			     RTE_ETH_RX_OFFLOAD_TIMESTAMP |
+			     RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+			     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 
 	if (!config->mprq.enabled)
 		offloads |= RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
 	if (config->hw_fcs_strip)
-		offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	if (config->hw_csum)
-		offloads |= (DEV_RX_OFFLOAD_IPV4_CKSUM |
-			     DEV_RX_OFFLOAD_UDP_CKSUM |
-			     DEV_RX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_RX_OFFLOAD_TCP_CKSUM);
 	if (config->hw_vlan_strip)
-		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	if (MLX5_LRO_SUPPORTED(dev))
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 	return offloads;
 }
 
@@ -363,7 +363,7 @@ mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev)
 uint64_t
 mlx5_get_rx_port_offloads(void)
 {
-	uint64_t offloads = DEV_RX_OFFLOAD_VLAN_FILTER;
+	uint64_t offloads = RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 
 	return offloads;
 }
@@ -695,7 +695,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 				    dev->data->dev_conf.rxmode.offloads;
 
 		/* The offloads should be checked on rte_eth_dev layer. */
-		MLX5_ASSERT(offloads & DEV_RX_OFFLOAD_SCATTER);
+		MLX5_ASSERT(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) {
 			DRV_LOG(ERR, "port %u queue index %u split "
 				     "offload not configured",
@@ -1329,7 +1329,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	struct mlx5_dev_config *config = &priv->config;
 	uint64_t offloads = conf->offloads |
 			   dev->data->dev_conf.rxmode.offloads;
-	unsigned int lro_on_queue = !!(offloads & DEV_RX_OFFLOAD_TCP_LRO);
+	unsigned int lro_on_queue = !!(offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO);
 	unsigned int max_rx_pkt_len = lro_on_queue ?
 			dev->data->dev_conf.rxmode.max_lro_pkt_size :
 			dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -1431,7 +1431,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	} while (tail_len || !rte_is_power_of_2(tmpl->rxq.rxseg_n));
 	MLX5_ASSERT(tmpl->rxq.rxseg_n &&
 		    tmpl->rxq.rxseg_n <= MLX5_MAX_RXQ_NSEG);
-	if (tmpl->rxq.rxseg_n > 1 && !(offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	if (tmpl->rxq.rxseg_n > 1 && !(offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		DRV_LOG(ERR, "port %u Rx queue %u: Scatter offload is not"
 			" configured and no enough mbuf space(%u) to contain "
 			"the maximum RX packet length(%u) with head-room(%u)",
@@ -1475,7 +1475,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			config->mprq.stride_size_n : mprq_stride_size;
 		tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
 		tmpl->rxq.strd_scatter_en =
-				!!(offloads & DEV_RX_OFFLOAD_SCATTER);
+				!!(offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
 				config->mprq.max_memcpy_len);
 		max_lro_size = RTE_MIN(max_rx_pkt_len,
@@ -1490,7 +1490,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
 		tmpl->rxq.sges_n = 0;
 		max_lro_size = max_rx_pkt_len;
-	} else if (offloads & DEV_RX_OFFLOAD_SCATTER) {
+	} else if (offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		unsigned int sges_n;
 
 		if (lro_on_queue && first_mb_free_size <
@@ -1551,9 +1551,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	}
 	mlx5_max_lro_msg_size_adjust(dev, idx, max_lro_size);
 	/* Toggle RX checksum offload if hardware supports it. */
-	tmpl->rxq.csum = !!(offloads & DEV_RX_OFFLOAD_CHECKSUM);
+	tmpl->rxq.csum = !!(offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM);
 	/* Configure Rx timestamp. */
-	tmpl->rxq.hw_timestamp = !!(offloads & DEV_RX_OFFLOAD_TIMESTAMP);
+	tmpl->rxq.hw_timestamp = !!(offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP);
 	tmpl->rxq.timestamp_rx_flag = 0;
 	if (tmpl->rxq.hw_timestamp && rte_mbuf_dyn_rx_timestamp_register(
 			&tmpl->rxq.timestamp_offset,
@@ -1562,11 +1562,11 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		goto error;
 	}
 	/* Configure VLAN stripping. */
-	tmpl->rxq.vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	tmpl->rxq.vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	/* By default, FCS (CRC) is stripped by hardware. */
 	tmpl->rxq.crc_present = 0;
 	tmpl->rxq.lro = lro_on_queue;
-	if (offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		if (config->hw_fcs_strip) {
 			/*
 			 * RQs used for LRO-enabled TIRs should not be
@@ -1596,7 +1596,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		tmpl->rxq.crc_present << 2);
 	/* Save port ID. */
 	tmpl->rxq.rss_hash = !!priv->rss_conf.rss_hf &&
-		(!!(dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS));
+		(!!(dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS));
 	tmpl->rxq.port_id = dev->data->port_id;
 	tmpl->priv = priv;
 	tmpl->rxq.mp = rx_seg[0].mp;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.h b/drivers/net/mlx5/mlx5_rxtx_vec.h
index 93b4f517bb3e..65d91bdf67e2 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.h
@@ -16,10 +16,10 @@
 
 /* HW checksum offload capabilities of vectorized Tx. */
 #define MLX5_VEC_TX_CKSUM_OFFLOAD_CAP \
-	(DEV_TX_OFFLOAD_IPV4_CKSUM | \
-	 DEV_TX_OFFLOAD_UDP_CKSUM | \
-	 DEV_TX_OFFLOAD_TCP_CKSUM | \
-	 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
+	 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 
 /*
  * Compile time sanity check for vectorized functions.
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index df671379e46d..12aeba60348a 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -523,36 +523,36 @@ mlx5_select_tx_function(struct rte_eth_dev *dev)
 	unsigned int diff = 0, olx = 0, i, m;
 
 	MLX5_ASSERT(priv);
-	if (tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 		/* We should support Multi-Segment Packets. */
 		olx |= MLX5_TXOFF_CONFIG_MULTI;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			   DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			   DEV_TX_OFFLOAD_GRE_TNL_TSO |
-			   DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			   RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO)) {
 		/* We should support TCP Send Offload. */
 		olx |= MLX5_TXOFF_CONFIG_TSO;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IP_TNL_TSO |
-			   DEV_TX_OFFLOAD_UDP_TNL_TSO |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support Software Parser for Tunnels. */
 		olx |= MLX5_TXOFF_CONFIG_SWP;
 	}
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		/* We should support IP/TCP/UDP Checksums. */
 		olx |= MLX5_TXOFF_CONFIG_CSUM;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_VLAN_INSERT) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
 		/* We should support VLAN insertion. */
 		olx |= MLX5_TXOFF_CONFIG_VLAN;
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP &&
 	    rte_mbuf_dynflag_lookup
 			(RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME, NULL) >= 0 &&
 	    rte_mbuf_dynfield_lookup
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index eb4d34ca559e..06cdeba662bc 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -98,35 +98,35 @@ uint64_t
 mlx5_get_tx_port_offloads(struct rte_eth_dev *dev)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	uint64_t offloads = (DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_VLAN_INSERT);
+	uint64_t offloads = (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT);
 	struct mlx5_dev_config *config = &priv->config;
 
 	if (config->hw_csum)
-		offloads |= (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_UDP_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM);
+		offloads |= (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM);
 	if (config->tso)
-		offloads |= DEV_TX_OFFLOAD_TCP_TSO;
+		offloads |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (config->tx_pp)
-		offloads |= DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP;
+		offloads |= RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP;
 	if (config->swp) {
 		if (config->hw_csum)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->tso)
-			offloads |= (DEV_TX_OFFLOAD_IP_TNL_TSO |
-				     DEV_TX_OFFLOAD_UDP_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	}
 	if (config->tunnel_en) {
 		if (config->hw_csum)
-			offloads |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+			offloads |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 		if (config->tso)
-			offloads |= (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GRE_TNL_TSO |
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+			offloads |= (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
 	}
 	if (!config->mprq.enabled)
-		offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+		offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	return offloads;
 }
 
@@ -801,17 +801,17 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	unsigned int inlen_mode; /* Minimal required Inline data. */
 	unsigned int txqs_inline; /* Min Tx queues to enable inline. */
 	uint64_t dev_txoff = priv->dev_data->dev_conf.txmode.offloads;
-	bool tso = txq_ctrl->txq.offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-					    DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-					    DEV_TX_OFFLOAD_GRE_TNL_TSO |
-					    DEV_TX_OFFLOAD_IP_TNL_TSO |
-					    DEV_TX_OFFLOAD_UDP_TNL_TSO);
+	bool tso = txq_ctrl->txq.offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+					    RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+					    RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO);
 	bool vlan_inline;
 	unsigned int temp;
 
 	txq_ctrl->txq.fast_free =
-		!!((txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE) &&
-		   !(txq_ctrl->txq.offloads & DEV_TX_OFFLOAD_MULTI_SEGS) &&
+		!!((txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) &&
+		   !(txq_ctrl->txq.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) &&
 		   !config->mprq.enabled);
 	if (config->txqs_inline == MLX5_ARG_UNSET)
 		txqs_inline =
@@ -870,7 +870,7 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 	 * tx_burst routine.
 	 */
 	txq_ctrl->txq.vlan_en = config->hw_vlan_insert;
-	vlan_inline = (dev_txoff & DEV_TX_OFFLOAD_VLAN_INSERT) &&
+	vlan_inline = (dev_txoff & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) &&
 		      !config->hw_vlan_insert;
 	/*
 	 * If there are few Tx queues it is prioritized
@@ -979,9 +979,9 @@ txq_set_params(struct mlx5_txq_ctrl *txq_ctrl)
 		txq_ctrl->txq.tso_en = 1;
 	}
 	txq_ctrl->txq.tunnel_en = config->tunnel_en | config->swp;
-	txq_ctrl->txq.swp_en = ((DEV_TX_OFFLOAD_IP_TNL_TSO |
-				 DEV_TX_OFFLOAD_UDP_TNL_TSO |
-				 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) &
+	txq_ctrl->txq.swp_en = ((RTE_ETH_TX_OFFLOAD_IP_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO |
+				 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) &
 				txq_ctrl->txq.offloads) && config->swp;
 }
 
diff --git a/drivers/net/mlx5/mlx5_vlan.c b/drivers/net/mlx5/mlx5_vlan.c
index 60f97f2d2d1f..07792fc5d94f 100644
--- a/drivers/net/mlx5/mlx5_vlan.c
+++ b/drivers/net/mlx5/mlx5_vlan.c
@@ -142,9 +142,9 @@ mlx5_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int i;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		int hw_vlan_strip = !!(dev->data->dev_conf.rxmode.offloads &
-				       DEV_RX_OFFLOAD_VLAN_STRIP);
+				       RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 		if (!priv->config.hw_vlan_strip) {
 			DRV_LOG(ERR, "port %u VLAN stripping is not supported",
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index 7e1df1c75147..578816fe0513 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -464,8 +464,8 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
-	if (config->ind_table_max_size > (unsigned int)ETH_RSS_RETA_SIZE_512)
-		config->ind_table_max_size = ETH_RSS_RETA_SIZE_512;
+	if (config->ind_table_max_size > (unsigned int)RTE_ETH_RSS_RETA_SIZE_512)
+		config->ind_table_max_size = RTE_ETH_RSS_RETA_SIZE_512;
 	DRV_LOG(DEBUG, "maximum Rx indirection table size is %u",
 		config->ind_table_max_size);
 	DRV_LOG(DEBUG, "VLAN stripping is %ssupported",
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index a3ee15020466..37803fe34538 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -114,7 +114,7 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 	struct mvneta_priv *priv = dev->data->dev_private;
 	struct neta_ppio_params *ppio_params;
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE) {
 		MVNETA_LOG(INFO, "Unsupported RSS and rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		if (dev->data->nb_rx_queues > 1)
@@ -126,11 +126,11 @@ mvneta_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
 				 MRVL_NETA_ETH_HDRS_LEN;
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ppio_params = &priv->ppio_params;
@@ -155,10 +155,10 @@ static int
 mvneta_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
 		   struct rte_eth_dev_info *info)
 {
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G;
 
 	info->max_rx_queues = MRVL_NETA_RXQ_MAX;
 	info->max_tx_queues = MRVL_NETA_TXQ_MAX;
@@ -510,28 +510,28 @@ mvneta_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 
 	neta_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
diff --git a/drivers/net/mvneta/mvneta_ethdev.h b/drivers/net/mvneta/mvneta_ethdev.h
index ef8067790f82..ccd47e8f4927 100644
--- a/drivers/net/mvneta/mvneta_ethdev.h
+++ b/drivers/net/mvneta/mvneta_ethdev.h
@@ -54,15 +54,15 @@
 #define MRVL_NETA_MRU_TO_MTU(mru)	((mru) - MRVL_NETA_HDRS_LEN)
 
 /** Rx offloads capabilities */
-#define MVNETA_RX_OFFLOADS (DEV_RX_OFFLOAD_JUMBO_FRAME | \
-			    DEV_RX_OFFLOAD_CHECKSUM)
+#define MVNETA_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+			    RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Tx offloads capabilities */
-#define MVNETA_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				    DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MVNETA_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				    RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MVNETA_TX_OFFLOADS (MVNETA_TX_OFFLOAD_CHECKSUM | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS)
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MVNETA_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 				PKT_TX_TCP_CKSUM | \
diff --git a/drivers/net/mvneta/mvneta_rxtx.c b/drivers/net/mvneta/mvneta_rxtx.c
index dfa7ecc09039..d28125ce9635 100644
--- a/drivers/net/mvneta/mvneta_rxtx.c
+++ b/drivers/net/mvneta/mvneta_rxtx.c
@@ -735,7 +735,7 @@ mvneta_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	rxq->priv = priv;
 	rxq->mp = mp;
 	rxq->cksum_enabled = dev->data->dev_conf.rxmode.offloads &
-			     DEV_RX_OFFLOAD_IPV4_CKSUM;
+			     RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	rxq->size = desc;
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 078aefbb8da4..539e196b807e 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -58,16 +58,16 @@
 #define MRVL_COOKIE_HIGH_ADDR_MASK 0xffffff0000000000
 
 /** Port Rx offload capabilities */
-#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
-			  DEV_RX_OFFLOAD_JUMBO_FRAME | \
-			  DEV_RX_OFFLOAD_CHECKSUM)
+#define MRVL_RX_OFFLOADS (RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			  RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+			  RTE_ETH_RX_OFFLOAD_CHECKSUM)
 
 /** Port Tx offloads capabilities */
-#define MRVL_TX_OFFLOAD_CHECKSUM (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-				  DEV_TX_OFFLOAD_UDP_CKSUM  | \
-				  DEV_TX_OFFLOAD_TCP_CKSUM)
+#define MRVL_TX_OFFLOAD_CHECKSUM (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 #define MRVL_TX_OFFLOADS (MRVL_TX_OFFLOAD_CHECKSUM | \
-			  DEV_TX_OFFLOAD_MULTI_SEGS)
+			  RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define MRVL_TX_PKT_OFFLOADS (PKT_TX_IP_CKSUM | \
 			      PKT_TX_TCP_CKSUM | \
@@ -443,14 +443,14 @@ mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf)
 
 	if (rss_conf->rss_hf == 0) {
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
-	} else if (rss_conf->rss_hf & ETH_RSS_IPV4) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_2_TUPLE;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 1;
-	} else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) {
+	} else if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) {
 		priv->ppio_params.inqs_params.hash_type =
 			PP2_PPIO_HASH_T_5_TUPLE;
 		priv->rss_hf_tcp = 0;
@@ -484,8 +484,8 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE &&
-	    dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(INFO, "Unsupported rx multi queue mode %d",
 			dev->data->dev_conf.rxmode.mq_mode);
 		return -EINVAL;
@@ -496,7 +496,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len -
 				 MRVL_PP2_ETH_HDRS_LEN;
 		if (dev->data->mtu > priv->max_mtu) {
@@ -508,7 +508,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		priv->multiseg = 1;
 
 	ret = mrvl_configure_rxqs(priv, dev->data->port_id,
@@ -530,7 +530,7 @@ mrvl_dev_configure(struct rte_eth_dev *dev)
 		return ret;
 
 	if (dev->data->nb_rx_queues == 1 &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		MRVL_LOG(WARNING, "Disabling hash for 1 rx queue");
 		priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE;
 		priv->configured = 1;
@@ -632,7 +632,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_UP;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 		return 0;
 	}
 
@@ -653,7 +653,7 @@ mrvl_dev_set_link_up(struct rte_eth_dev *dev)
 		return ret;
 	}
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -673,14 +673,14 @@ mrvl_dev_set_link_down(struct rte_eth_dev *dev)
 	int ret;
 
 	if (!priv->ppio) {
-		dev->data->dev_link.link_status = ETH_LINK_DOWN;
+		dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 		return 0;
 	}
 	ret = pp2_ppio_disable(priv->ppio);
 	if (ret)
 		return ret;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
@@ -902,7 +902,7 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 	if (dev->data->all_multicast == 1)
 		mrvl_allmulticast_enable(dev);
 
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 		ret = mrvl_populate_vlan_table(dev, 1);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to populate VLAN table");
@@ -938,11 +938,11 @@ mrvl_dev_start(struct rte_eth_dev *dev)
 		priv->flow_ctrl = 0;
 	}
 
-	if (dev->data->dev_link.link_status == ETH_LINK_UP) {
+	if (dev->data->dev_link.link_status == RTE_ETH_LINK_UP) {
 		ret = mrvl_dev_set_link_up(dev);
 		if (ret) {
 			MRVL_LOG(ERR, "Failed to set link up");
-			dev->data->dev_link.link_status = ETH_LINK_DOWN;
+			dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 			goto out;
 		}
 	}
@@ -1211,30 +1211,30 @@ mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 
 	switch (ethtool_cmd_speed(&edata)) {
 	case SPEED_10:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		break;
 	case SPEED_100:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 	case SPEED_1000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 	case SPEED_2500:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_2_5G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 	case SPEED_10000:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	default:
-		dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE;
+		dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	}
 
-	dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX :
-							 ETH_LINK_HALF_DUPLEX;
-	dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG :
-							   ETH_LINK_FIXED;
+	dev->data->dev_link.link_duplex = edata.duplex ? RTE_ETH_LINK_FULL_DUPLEX :
+							 RTE_ETH_LINK_HALF_DUPLEX;
+	dev->data->dev_link.link_autoneg = edata.autoneg ? RTE_ETH_LINK_AUTONEG :
+							   RTE_ETH_LINK_FIXED;
 	pp2_ppio_get_link_state(priv->ppio, &link_up);
-	dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -1718,11 +1718,11 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 {
 	struct mrvl_priv *priv = dev->data->dev_private;
 
-	info->speed_capa = ETH_LINK_SPEED_10M |
-			   ETH_LINK_SPEED_100M |
-			   ETH_LINK_SPEED_1G |
-			   ETH_LINK_SPEED_2_5G |
-			   ETH_LINK_SPEED_10G;
+	info->speed_capa = RTE_ETH_LINK_SPEED_10M |
+			   RTE_ETH_LINK_SPEED_100M |
+			   RTE_ETH_LINK_SPEED_1G |
+			   RTE_ETH_LINK_SPEED_2_5G |
+			   RTE_ETH_LINK_SPEED_10G;
 
 	info->max_rx_queues = MRVL_PP2_RXQ_MAX;
 	info->max_tx_queues = MRVL_PP2_TXQ_MAX;
@@ -1742,9 +1742,9 @@ mrvl_dev_infos_get(struct rte_eth_dev *dev,
 	info->tx_offload_capa = MRVL_TX_OFFLOADS;
 	info->tx_queue_offload_capa = MRVL_TX_OFFLOADS;
 
-	info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-				       ETH_RSS_NONFRAG_IPV4_TCP |
-				       ETH_RSS_NONFRAG_IPV4_UDP;
+	info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+				       RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				       RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	/* By default packets are dropped if no descriptors are available */
 	info->default_rxconf.rx_drop_en = 1;
@@ -1873,13 +1873,13 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 	int ret;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		MRVL_LOG(ERR, "VLAN stripping is not supported\n");
 		return -ENOTSUP;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			ret = mrvl_populate_vlan_table(dev, 1);
 		else
 			ret = mrvl_populate_vlan_table(dev, 0);
@@ -1888,7 +1888,7 @@ static int mrvl_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 			return ret;
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
 		MRVL_LOG(ERR, "Extend VLAN not supported\n");
 		return -ENOTSUP;
 	}
@@ -2033,7 +2033,7 @@ mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 
 	rxq->priv = priv;
 	rxq->mp = mp;
-	rxq->cksum_enabled = offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+	rxq->cksum_enabled = offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 	rxq->queue_id = idx;
 	rxq->port_id = dev->data->port_id;
 	mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
@@ -2189,7 +2189,7 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		return ret;
 	}
 
-	fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
+	fc_conf->mode = en ? RTE_ETH_FC_RX_PAUSE : RTE_ETH_FC_NONE;
 
 	ret = pp2_ppio_get_tx_pause(priv->ppio, &en);
 	if (ret) {
@@ -2198,10 +2198,10 @@ mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	if (en) {
-		if (fc_conf->mode == RTE_FC_NONE)
-			fc_conf->mode = RTE_FC_TX_PAUSE;
+		if (fc_conf->mode == RTE_ETH_FC_NONE)
+			fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		else
-			fc_conf->mode = RTE_FC_FULL;
+			fc_conf->mode = RTE_ETH_FC_FULL;
 	}
 
 	return 0;
@@ -2247,19 +2247,19 @@ mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		rx_en = 1;
 		tx_en = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		rx_en = 0;
 		tx_en = 1;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		rx_en = 1;
 		tx_en = 0;
 		break;
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		rx_en = 0;
 		tx_en = 0;
 		break;
@@ -2336,11 +2336,11 @@ mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hash_type == PP2_PPIO_HASH_T_NONE)
 		rss_conf->rss_hf = 0;
 	else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
-		rss_conf->rss_hf = ETH_RSS_IPV4;
+		rss_conf->rss_hf = RTE_ETH_RSS_IPV4;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_conf->rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	return 0;
 }
@@ -3159,7 +3159,7 @@ mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name)
 	eth_dev->dev_ops = &mrvl_ops;
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_eth_dev_probing_finish(eth_dev);
 	return 0;
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 9e2a40597349..9c4ae80e7e16 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -40,16 +40,16 @@
 #include "hn_nvs.h"
 #include "ndis.h"
 
-#define HN_TX_OFFLOAD_CAPS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
-			    DEV_TX_OFFLOAD_TCP_CKSUM  | \
-			    DEV_TX_OFFLOAD_UDP_CKSUM  | \
-			    DEV_TX_OFFLOAD_TCP_TSO    | \
-			    DEV_TX_OFFLOAD_MULTI_SEGS | \
-			    DEV_TX_OFFLOAD_VLAN_INSERT)
+#define HN_TX_OFFLOAD_CAPS (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | \
+			    RTE_ETH_TX_OFFLOAD_TCP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_UDP_CKSUM  | \
+			    RTE_ETH_TX_OFFLOAD_TCP_TSO    | \
+			    RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
+			    RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 
-#define HN_RX_OFFLOAD_CAPS (DEV_RX_OFFLOAD_CHECKSUM | \
-			    DEV_RX_OFFLOAD_VLAN_STRIP | \
-			    DEV_RX_OFFLOAD_RSS_HASH)
+#define HN_RX_OFFLOAD_CAPS (RTE_ETH_RX_OFFLOAD_CHECKSUM | \
+			    RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			    RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NETVSC_ARG_LATENCY "latency"
 #define NETVSC_ARG_RXBREAK "rx_copybreak"
@@ -238,21 +238,21 @@ hn_dev_link_update(struct rte_eth_dev *dev,
 	hn_rndis_get_linkspeed(hv);
 
 	link = (struct rte_eth_link) {
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_autoneg = ETH_LINK_SPEED_FIXED,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_autoneg = RTE_ETH_LINK_SPEED_FIXED,
 		.link_speed = hv->link_speed / 10000,
 	};
 
 	if (hv->link_status == NDIS_MEDIA_STATE_CONNECTED)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	else
-		link.link_status = ETH_LINK_DOWN;
+		link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (old.link_status == link.link_status)
 		return 0;
 
 	PMD_INIT_LOG(DEBUG, "Port %d is %s", dev->data->port_id,
-		     (link.link_status == ETH_LINK_UP) ? "up" : "down");
+		     (link.link_status == RTE_ETH_LINK_UP) ? "up" : "down");
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -263,14 +263,14 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
 	struct hn_data *hv = dev->data->dev_private;
 	int rc;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->min_rx_bufsize = HN_MIN_RX_BUF_SIZE;
 	dev_info->max_rx_pktlen  = HN_MAX_XFER_LEN;
 	dev_info->max_mac_addrs  = 1;
 
 	dev_info->hash_key_size = NDIS_HASH_KEYSIZE_TOEPLITZ;
 	dev_info->flow_type_rss_offloads = hv->rss_offloads;
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 
 	dev_info->max_rx_queues = hv->max_queues;
 	dev_info->max_tx_queues = hv->max_queues;
@@ -306,8 +306,8 @@ static int hn_rss_reta_update(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < NDIS_HASH_INDCNT; i++) {
-		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
-		uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint64_t mask = (uint64_t)1 << shift;
 
 		if (reta_conf[idx].mask & mask)
@@ -346,8 +346,8 @@ static int hn_rss_reta_query(struct rte_eth_dev *dev,
 	}
 
 	for (i = 0; i < NDIS_HASH_INDCNT; i++) {
-		uint16_t idx = i / RTE_RETA_GROUP_SIZE;
-		uint16_t shift = i % RTE_RETA_GROUP_SIZE;
+		uint16_t idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint16_t shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint64_t mask = (uint64_t)1 << shift;
 
 		if (reta_conf[idx].mask & mask)
@@ -362,17 +362,17 @@ static void hn_rss_hash_init(struct hn_data *hv,
 	/* Convert from DPDK RSS hash flags to NDIS hash flags */
 	hv->rss_hash = NDIS_HASH_FUNCTION_TOEPLITZ;
 
-	if (rss_conf->rss_hf & ETH_RSS_IPV4)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV4)
 		hv->rss_hash |= NDIS_HASH_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV4;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6)
 		hv->rss_hash |=  NDIS_HASH_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_EX)
 		hv->rss_hash |=  NDIS_HASH_IPV6_EX;
-	if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6;
-	if (rss_conf->rss_hf & ETH_RSS_IPV6_TCP_EX)
+	if (rss_conf->rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 		hv->rss_hash |= NDIS_HASH_TCP_IPV6_EX;
 
 	memcpy(hv->rss_key, rss_conf->rss_key ? : rss_default_key,
@@ -427,22 +427,22 @@ static int hn_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	rss_conf->rss_hf = 0;
 	if (hv->rss_hash & NDIS_HASH_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_IPV4;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV4;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV4)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_IPV6;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6;
 
 	if (hv->rss_hash & NDIS_HASH_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_EX;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6)
-		rss_conf->rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_conf->rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (hv->rss_hash & NDIS_HASH_TCP_IPV6_EX)
-		rss_conf->rss_hf |= ETH_RSS_IPV6_TCP_EX;
+		rss_conf->rss_hf |= RTE_ETH_RSS_IPV6_TCP_EX;
 
 	return 0;
 }
@@ -686,8 +686,8 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev_conf->rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev_conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	unsupported = txmode->offloads & ~HN_TX_OFFLOAD_CAPS;
 	if (unsupported) {
@@ -705,7 +705,7 @@ static int hn_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	hv->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	hv->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	err = hn_rndis_conf_offload(hv, txmode->offloads,
 				    rxmode->offloads);
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index e3f7e636d731..cacb30385404 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -710,15 +710,15 @@ hn_rndis_query_rsscaps(struct hn_data *hv,
 
 	hv->rss_offloads = 0;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV4)
-		hv->rss_offloads |= ETH_RSS_IPV4
-			| ETH_RSS_NONFRAG_IPV4_TCP
-			| ETH_RSS_NONFRAG_IPV4_UDP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV4
+			| RTE_ETH_RSS_NONFRAG_IPV4_TCP
+			| RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6)
-		hv->rss_offloads |= ETH_RSS_IPV6
-			| ETH_RSS_NONFRAG_IPV6_TCP;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6
+			| RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 	if (caps.ndis_caps & NDIS_RSS_CAP_IPV6_EX)
-		hv->rss_offloads |= ETH_RSS_IPV6_EX
-			| ETH_RSS_IPV6_TCP_EX;
+		hv->rss_offloads |= RTE_ETH_RSS_IPV6_EX
+			| RTE_ETH_RSS_IPV6_TCP_EX;
 
 	/* Commit! */
 	*rxr_cnt0 = rxr_cnt;
@@ -800,7 +800,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 		params.ndis_hdr.ndis_size = NDIS_OFFLOAD_PARAMS_SIZE;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -812,7 +812,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4)
 		    == NDIS_RXCSUM_CAP_TCP4)
 			params.ndis_tcp4csum |= NDIS_OFFLOAD_PARAM_RX;
@@ -826,7 +826,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4)
 			params.ndis_udp4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
@@ -839,7 +839,7 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (rx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM) {
+	if (rx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4)
 			params.ndis_udp4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
@@ -851,21 +851,21 @@ int hn_rndis_conf_offload(struct hn_data *hv,
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) {
 		if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_IP4)
 		    == NDIS_TXCSUM_CAP_IP4)
 			params.ndis_ip4csum = NDIS_OFFLOAD_PARAM_TX;
 		else
 			goto unsupported;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
 			params.ndis_ip4csum |= NDIS_OFFLOAD_PARAM_RX;
 		else
 			goto unsupported;
 	}
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023)
 			params.ndis_lsov2_ip4 = NDIS_OFFLOAD_LSOV2_ON;
 		else
@@ -907,41 +907,41 @@ int hn_rndis_get_offload(struct hn_data *hv,
 		return error;
 	}
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_IP4)
 	    == HN_NDIS_TXCSUM_CAP_IP4)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & HN_NDIS_TXCSUM_CAP_TCP4)
 	    == HN_NDIS_TXCSUM_CAP_TCP4 &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & HN_NDIS_TXCSUM_CAP_TCP6)
 	    == HN_NDIS_TXCSUM_CAP_TCP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_txcsum & NDIS_TXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_txcsum & NDIS_TXCSUM_CAP_UDP6))
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_UDP_CKSUM;
 
 	if ((hwcaps.ndis_lsov2.ndis_ip4_encap & NDIS_OFFLOAD_ENCAP_8023) &&
 	    (hwcaps.ndis_lsov2.ndis_ip6_opts & HN_NDIS_LSOV2_CAP_IP6)
 	    == HN_NDIS_LSOV2_CAP_IP6)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP |
-				    DEV_RX_OFFLOAD_RSS_HASH;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				    RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_IP4)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_TCP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_TCP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if ((hwcaps.ndis_csum.ndis_ip4_rxcsum & NDIS_RXCSUM_CAP_UDP4) &&
 	    (hwcaps.ndis_csum.ndis_ip6_rxcsum & NDIS_RXCSUM_CAP_UDP6))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_UDP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 
 	return 0;
 }
diff --git a/drivers/net/nfb/nfb_ethdev.c b/drivers/net/nfb/nfb_ethdev.c
index 7e91d5984740..c2ff1c999869 100644
--- a/drivers/net/nfb/nfb_ethdev.c
+++ b/drivers/net/nfb/nfb_ethdev.c
@@ -200,7 +200,7 @@ nfb_eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = dev->data->nb_rx_queues;
 	dev_info->max_tx_queues = dev->data->nb_tx_queues;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -268,26 +268,26 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 
 	status.speed = MAC_SPEED_UNKNOWN;
 
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_status = ETH_LINK_DOWN;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_SPEED_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_SPEED_FIXED;
 
 	if (internals->rxmac[0] != NULL) {
 		nc_rxmac_read_status(internals->rxmac[0], &status);
 
 		switch (status.speed) {
 		case MAC_SPEED_10G:
-			link.link_speed = ETH_SPEED_NUM_10G;
+			link.link_speed = RTE_ETH_SPEED_NUM_10G;
 			break;
 		case MAC_SPEED_40G:
-			link.link_speed = ETH_SPEED_NUM_40G;
+			link.link_speed = RTE_ETH_SPEED_NUM_40G;
 			break;
 		case MAC_SPEED_100G:
-			link.link_speed = ETH_SPEED_NUM_100G;
+			link.link_speed = RTE_ETH_SPEED_NUM_100G;
 			break;
 		default:
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			break;
 		}
 	}
@@ -296,7 +296,7 @@ nfb_eth_link_update(struct rte_eth_dev *dev,
 		nc_rxmac_read_status(internals->rxmac[i], &status);
 
 		if (status.enabled && status.link_up) {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			break;
 		}
 	}
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index d6d4ba9663c6..f19e9834848b 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -42,7 +42,7 @@ nfb_check_timestamp(struct rte_devargs *devargs)
 	}
 	/* Timestamps are enabled when there is
 	 * key-value pair: enable_timestamp=1
-	 * TODO: timestamp should be enabled with DEV_RX_OFFLOAD_TIMESTAMP
+	 * TODO: timestamp should be enabled with RTE_ETH_RX_OFFLOAD_TIMESTAMP
 	 */
 	if (rte_kvargs_process(kvlist, TIMESTAMP_ARG,
 		timestamp_check_handler, NULL) < 0) {
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 1b4bc33593fb..dff7cfd3d6f9 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -160,8 +160,8 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Checking TX mode */
 	if (txmode->mq_mode) {
@@ -170,7 +170,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	}
 
 	/* Checking RX mode */
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS &&
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS &&
 	    !(hw->cap & NFP_NET_CFG_CTRL_RSS)) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
@@ -359,20 +359,20 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_IPV4_CKSUM) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
 			ctrl |= NFP_NET_CFG_CTRL_RXVLAN;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		hw->mtu = rxmode->max_rx_pkt_len;
 
-	if (txmode->offloads & DEV_TX_OFFLOAD_VLAN_INSERT)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT)
 		ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
 
 	/* L2 broadcast */
@@ -384,13 +384,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & DEV_TX_OFFLOAD_TCP_CKSUM)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if (txmode->offloads & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		if (hw->cap & NFP_NET_CFG_CTRL_LSO)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -398,7 +398,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	}
 
 	/* RX gather */
-	if (txmode->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -486,14 +486,14 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	int ret;
 
 	static const uint32_t ls_to_ethtool[] = {
-		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = ETH_SPEED_NUM_NONE,
-		[NFP_NET_CFG_STS_LINK_RATE_1G]          = ETH_SPEED_NUM_1G,
-		[NFP_NET_CFG_STS_LINK_RATE_10G]         = ETH_SPEED_NUM_10G,
-		[NFP_NET_CFG_STS_LINK_RATE_25G]         = ETH_SPEED_NUM_25G,
-		[NFP_NET_CFG_STS_LINK_RATE_40G]         = ETH_SPEED_NUM_40G,
-		[NFP_NET_CFG_STS_LINK_RATE_50G]         = ETH_SPEED_NUM_50G,
-		[NFP_NET_CFG_STS_LINK_RATE_100G]        = ETH_SPEED_NUM_100G,
+		[NFP_NET_CFG_STS_LINK_RATE_UNSUPPORTED] = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_UNKNOWN]     = RTE_ETH_SPEED_NUM_NONE,
+		[NFP_NET_CFG_STS_LINK_RATE_1G]          = RTE_ETH_SPEED_NUM_1G,
+		[NFP_NET_CFG_STS_LINK_RATE_10G]         = RTE_ETH_SPEED_NUM_10G,
+		[NFP_NET_CFG_STS_LINK_RATE_25G]         = RTE_ETH_SPEED_NUM_25G,
+		[NFP_NET_CFG_STS_LINK_RATE_40G]         = RTE_ETH_SPEED_NUM_40G,
+		[NFP_NET_CFG_STS_LINK_RATE_50G]         = RTE_ETH_SPEED_NUM_50G,
+		[NFP_NET_CFG_STS_LINK_RATE_100G]        = RTE_ETH_SPEED_NUM_100G,
 	};
 
 	PMD_DRV_LOG(DEBUG, "Link update");
@@ -505,15 +505,15 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
 	if (nn_link_status & NFP_NET_CFG_STS_LINK)
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	nn_link_status = (nn_link_status >> NFP_NET_CFG_STS_LINK_RATE_SHIFT) &
 			 NFP_NET_CFG_STS_LINK_RATE_MASK;
 
 	if (nn_link_status >= RTE_DIM(ls_to_ethtool))
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	else
 		link.link_speed = ls_to_ethtool[nn_link_status];
 
@@ -702,26 +702,26 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = 1;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXVLAN)
-		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_IPV4_CKSUM |
-					     DEV_RX_OFFLOAD_UDP_CKSUM |
-					     DEV_RX_OFFLOAD_TCP_CKSUM;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
-		dev_info->tx_offload_capa = DEV_TX_OFFLOAD_VLAN_INSERT;
+		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_IPV4_CKSUM |
-					     DEV_TX_OFFLOAD_UDP_CKSUM |
-					     DEV_TX_OFFLOAD_TCP_CKSUM;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+					     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
 		.rx_thresh = {
@@ -758,25 +758,25 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	};
 
 	/* All NFP devices support jumbo frames */
-	dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	if (hw->cap & NFP_NET_CFG_CTRL_RSS) {
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
-		dev_info->flow_type_rss_offloads = ETH_RSS_IPV4 |
-						   ETH_RSS_NONFRAG_IPV4_TCP |
-						   ETH_RSS_NONFRAG_IPV4_UDP |
-						   ETH_RSS_IPV6 |
-						   ETH_RSS_NONFRAG_IPV6_TCP |
-						   ETH_RSS_NONFRAG_IPV6_UDP;
+		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
+						   RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+						   RTE_ETH_RSS_IPV6 |
+						   RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+						   RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 		dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
 		dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
 	}
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			       ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G |
-			       ETH_LINK_SPEED_50G | ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			       RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
+			       RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -847,7 +847,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	if (link.link_status)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 			    dev->data->port_id, link.link_speed,
-			    link.link_duplex == ETH_LINK_FULL_DUPLEX
+			    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
 			    ? "full-duplex" : "half-duplex");
 	else
 		PMD_DRV_LOG(INFO, " Port %d: Link Down",
@@ -964,9 +964,9 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 
 	/* switch to jumbo mode if needed */
 	if ((uint32_t)mtu > RTE_ETHER_MTU)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->data->dev_conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* update max frame size */
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = (uint32_t)mtu;
@@ -990,12 +990,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	new_ctrl = 0;
 
 	/* Enable vlan strip if it is not configured yet */
-	if ((mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if ((mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    !(hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_RXVLAN;
 
 	/* Disable vlan strip just if it is configured */
-	if (!(mask & ETH_VLAN_STRIP_OFFLOAD) &&
+	if (!(mask & RTE_ETH_VLAN_STRIP_OFFLOAD) &&
 	    (hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN))
 		new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_RXVLAN;
 
@@ -1035,8 +1035,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
 
 		if (!mask)
@@ -1116,8 +1116,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 	 */
 	for (i = 0; i < reta_size; i += 4) {
 		/* Handling 4 RSS entries per loop */
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
 
 		if (!mask)
@@ -1155,22 +1155,22 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
 
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
 
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1240,22 +1240,22 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
-		rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
-		rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP | ETH_RSS_NONFRAG_IPV6_UDP;
+		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP | RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
 	/* Propagate current RSS hash functions to caller */
 	rss_conf->rss_hf = rss_hf;
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 534a38c14f94..7a6a963bf6cc 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -140,7 +140,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index b697b55865cc..ac960328c7de 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -101,7 +101,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 3b5c6615adfa..fc76b84b5b66 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -409,7 +409,7 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	dev->data->dev_link.link_status = link_up;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG)
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 		negotiate = true;
 
 	err = hw->mac.get_link_capabilities(hw, &speed, &negotiate);
@@ -418,11 +418,11 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 
 	allowed_speeds = 0;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_1GB_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_1G;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_1G;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_100M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_100M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_100M;
 	if (hw->mac.default_speeds & NGBE_LINK_SPEED_10M_FULL)
-		allowed_speeds |= ETH_LINK_SPEED_10M;
+		allowed_speeds |= RTE_ETH_LINK_SPEED_10M;
 
 	if (*link_speeds & ~allowed_speeds) {
 		PMD_INIT_LOG(ERR, "Invalid link setting");
@@ -430,14 +430,14 @@ ngbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = hw->mac.default_speeds;
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= NGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= NGBE_LINK_SPEED_100M_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_10M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10M)
 			speed |= NGBE_LINK_SPEED_10M_FULL;
 	}
 
@@ -653,8 +653,8 @@ ngbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->rx_desc_lim = rx_desc_lim;
 	dev_info->tx_desc_lim = tx_desc_lim;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_100M |
-				ETH_LINK_SPEED_10M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_100M |
+				RTE_ETH_LINK_SPEED_10M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -682,11 +682,11 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	link.link_autoneg = !(dev->data->dev_conf.link_speeds &
-			~ETH_LINK_SPEED_AUTONEG);
+			~RTE_ETH_LINK_SPEED_AUTONEG);
 
 	hw->mac.get_link_status = true;
 
@@ -699,8 +699,8 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_NONE;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -708,27 +708,27 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
 		return rte_eth_linkstatus_set(dev, &link);
 
 	intr->flags &= ~NGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case NGBE_LINK_SPEED_UNKNOWN:
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 		break;
 
 	case NGBE_LINK_SPEED_10M_FULL:
-		link.link_speed = ETH_SPEED_NUM_10M;
+		link.link_speed = RTE_ETH_SPEED_NUM_10M;
 		lan_speed = 0;
 		break;
 
 	case NGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		lan_speed = 1;
 		break;
 
 	case NGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		lan_speed = 2;
 		break;
 	}
@@ -912,11 +912,11 @@ ngbe_dev_link_status_print(struct rte_eth_dev *dev)
 
 	rte_eth_linkstatus_get(dev, &link);
 
-	if (link.link_status == ETH_LINK_UP) {
+	if (link.link_status == RTE_ETH_LINK_UP) {
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -956,7 +956,7 @@ ngbe_dev_interrupt_action(struct rte_eth_dev *dev)
 		ngbe_dev_link_update(dev, 0);
 
 		/* likely to up */
-		if (link.link_status != ETH_LINK_UP)
+		if (link.link_status != RTE_ETH_LINK_UP)
 			/* handle it 1 sec later, wait it being stable */
 			timeout = NGBE_LINK_UP_CHECK_TIMEOUT;
 		/* likely to down */
diff --git a/drivers/net/null/rte_eth_null.c b/drivers/net/null/rte_eth_null.c
index 508bafc12a14..df4ddb3b40e2 100644
--- a/drivers/net/null/rte_eth_null.c
+++ b/drivers/net/null/rte_eth_null.c
@@ -61,16 +61,16 @@ struct pmd_internals {
 	rte_spinlock_t rss_lock;
 
 	uint16_t reta_size;
-	struct rte_eth_rss_reta_entry64 reta_conf[ETH_RSS_RETA_SIZE_128 /
-			RTE_RETA_GROUP_SIZE];
+	struct rte_eth_rss_reta_entry64 reta_conf[RTE_ETH_RSS_RETA_SIZE_128 /
+			RTE_ETH_RETA_GROUP_SIZE];
 
 	uint8_t rss_key[40];                /**< 40-byte hash key. */
 };
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_null_logtype, NOTICE);
@@ -189,7 +189,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return -EINVAL;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -199,7 +199,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	if (dev == NULL)
 		return 0;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
@@ -381,9 +381,9 @@ eth_rss_reta_update(struct rte_eth_dev *dev,
 	rte_spinlock_lock(&internal->rss_lock);
 
 	/* Copy RETA table */
-	for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
+	for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
 		internal->reta_conf[i].mask = reta_conf[i].mask;
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				internal->reta_conf[i].reta[j] = reta_conf[i].reta[j];
 	}
@@ -406,8 +406,8 @@ eth_rss_reta_query(struct rte_eth_dev *dev,
 	rte_spinlock_lock(&internal->rss_lock);
 
 	/* Copy RETA table */
-	for (i = 0; i < (internal->reta_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (internal->reta_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = internal->reta_conf[i].reta[j];
 	}
@@ -538,8 +538,8 @@ eth_dev_null_create(struct rte_vdev_device *dev, struct pmd_options *args)
 	internals->port_id = eth_dev->data->port_id;
 	rte_eth_random_addr(internals->eth_addr.addr_bytes);
 
-	internals->flow_type_rss_offloads =  ETH_RSS_PROTO_MASK;
-	internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_RETA_GROUP_SIZE;
+	internals->flow_type_rss_offloads =  RTE_ETH_RSS_PROTO_MASK;
+	internals->reta_size = RTE_DIM(internals->reta_conf) * RTE_ETH_RETA_GROUP_SIZE;
 
 	rte_memcpy(internals->rss_key, default_rss_key, 40);
 
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 9f4c0503b4d4..947dabdca2c5 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -158,7 +158,7 @@ octeontx_link_status_print(struct rte_eth_dev *eth_dev,
 		octeontx_log_info("Port %u: Link Up - speed %u Mbps - %s",
 			  (eth_dev->data->port_id),
 			  link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		octeontx_log_info("Port %d: Link Down",
@@ -171,38 +171,38 @@ octeontx_link_status_update(struct octeontx_nic *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	switch (nic->speed) {
 	case OCTEONTX_LINK_SPEED_SGMII:
-		link->link_speed = ETH_SPEED_NUM_1G;
+		link->link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_XAUI:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RXAUI:
 	case OCTEONTX_LINK_SPEED_10G_R:
-		link->link_speed = ETH_SPEED_NUM_10G;
+		link->link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	case OCTEONTX_LINK_SPEED_QSGMII:
-		link->link_speed = ETH_SPEED_NUM_5G;
+		link->link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 	case OCTEONTX_LINK_SPEED_40G_R:
-		link->link_speed = ETH_SPEED_NUM_40G;
+		link->link_speed = RTE_ETH_SPEED_NUM_40G;
 		break;
 
 	case OCTEONTX_LINK_SPEED_RESERVE1:
 	case OCTEONTX_LINK_SPEED_RESERVE2:
 	default:
-		link->link_speed = ETH_SPEED_NUM_NONE;
+		link->link_speed = RTE_ETH_SPEED_NUM_NONE;
 		octeontx_log_err("incorrect link speed %d", nic->speed);
 		break;
 	}
 
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -355,20 +355,20 @@ octeontx_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    nic->tx_offloads & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    nic->tx_offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= OCCTX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(nic->tx_offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= OCCTX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (nic->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (nic->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= OCCTX_TX_MULTI_SEG_F;
 
 	return flags;
@@ -380,21 +380,21 @@ octeontx_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct octeontx_nic *nic = octeontx_pmd_priv(eth_dev);
 	uint16_t flags = 0;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (nic->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= OCCTX_RX_OFFLOAD_CSUM_F;
 
-	if (nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		flags |= OCCTX_RX_MULTI_SEG_F;
 		eth_dev->data->scattered_rx = 1;
 		/* If scatter mode is enabled, TX should also be in multi
 		 * seg mode, else memory leak will occur
 		 */
-		nic->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		nic->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	return flags;
@@ -423,18 +423,18 @@ octeontx_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		octeontx_log_err("unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (!(txmode->offloads & DEV_TX_OFFLOAD_MT_LOCKFREE)) {
+	if (!(txmode->offloads & RTE_ETH_TX_OFFLOAD_MT_LOCKFREE)) {
 		PMD_INIT_LOG(NOTICE, "cant disable lockfree tx");
-		txmode->offloads |= DEV_TX_OFFLOAD_MT_LOCKFREE;
+		txmode->offloads |= RTE_ETH_TX_OFFLOAD_MT_LOCKFREE;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		octeontx_log_err("setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -534,13 +534,13 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		octeontx_log_err("Scatter mode is disabled");
 		return -EINVAL;
 	}
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((nic->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((nic->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * OCCTX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -553,9 +553,9 @@ octeontx_dev_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 		return rc;
 
 	if (frame_size > OCCTX_L2_MAX_LEN)
-		nic->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		nic->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* Update max_rx_pkt_len */
 	data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -582,7 +582,7 @@ octeontx_recheck_rx_offloads(struct octeontx_rxq *rxq)
 
 	/* Setup scatter mode if needed by jumbo */
 	if (data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
-		nic->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
+		nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		nic->rx_offload_flags |= octeontx_rx_offload_flags(eth_dev);
 		nic->tx_offload_flags |= octeontx_tx_offload_flags(eth_dev);
 	}
@@ -854,10 +854,10 @@ octeontx_dev_info(struct rte_eth_dev *dev,
 	struct octeontx_nic *nic = octeontx_pmd_priv(dev);
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-			ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_40G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+			RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_40G;
 
 	/* Min/Max MTU supported */
 	dev_info->min_rx_bufsize = OCCTX_MIN_FRS;
@@ -1369,7 +1369,7 @@ octeontx_create(struct rte_vdev_device *dev, int port, uint8_t evdev,
 	nic->ev_ports = 1;
 	nic->print_flag = -1;
 
-	data->dev_link.link_status = ETH_LINK_DOWN;
+	data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	data->dev_started = 0;
 	data->promiscuous = 0;
 	data->all_multicast = 0;
diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
index b73515de37ca..7215039507c3 100644
--- a/drivers/net/octeontx/octeontx_ethdev.h
+++ b/drivers/net/octeontx/octeontx_ethdev.h
@@ -55,24 +55,24 @@
 #define OCCTX_MAX_MTU		(OCCTX_MAX_FRS - OCCTX_L2_OVERHEAD)
 
 #define OCTEONTX_RX_OFFLOADS		(				   \
-					 DEV_RX_OFFLOAD_CHECKSUM	 | \
-					 DEV_RX_OFFLOAD_SCTP_CKSUM       | \
-					 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_RX_OFFLOAD_SCATTER	         | \
-					 DEV_RX_OFFLOAD_SCATTER		 | \
-					 DEV_RX_OFFLOAD_JUMBO_FRAME	 | \
-					 DEV_RX_OFFLOAD_VLAN_FILTER)
+					 RTE_ETH_RX_OFFLOAD_CHECKSUM	 | \
+					 RTE_ETH_RX_OFFLOAD_SCTP_CKSUM       | \
+					 RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_RX_OFFLOAD_SCATTER	         | \
+					 RTE_ETH_RX_OFFLOAD_SCATTER		 | \
+					 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME	 | \
+					 RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 
 #define OCTEONTX_TX_OFFLOADS		(				   \
-					 DEV_TX_OFFLOAD_MBUF_FAST_FREE	 | \
-					 DEV_TX_OFFLOAD_MT_LOCKFREE	 | \
-					 DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-					 DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_IPV4_CKSUM	 | \
-					 DEV_TX_OFFLOAD_TCP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_UDP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_SCTP_CKSUM	 | \
-					 DEV_TX_OFFLOAD_MULTI_SEGS)
+					 RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	 | \
+					 RTE_ETH_TX_OFFLOAD_MT_LOCKFREE	 | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+					 RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_IPV4_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_TCP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_UDP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_SCTP_CKSUM	 | \
+					 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 static inline struct octeontx_nic *
 octeontx_pmd_priv(struct rte_eth_dev *dev)
diff --git a/drivers/net/octeontx/octeontx_ethdev_ops.c b/drivers/net/octeontx/octeontx_ethdev_ops.c
index dbe13ce3826b..6ec2b71b0672 100644
--- a/drivers/net/octeontx/octeontx_ethdev_ops.c
+++ b/drivers/net/octeontx/octeontx_ethdev_ops.c
@@ -43,20 +43,20 @@ octeontx_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			rc = octeontx_vlan_hw_filter(nic, true);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags |= OCCTX_RX_VLAN_FLTR_F;
 		} else {
 			rc = octeontx_vlan_hw_filter(nic, false);
 			if (rc)
 				goto done;
 
-			nic->rx_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			nic->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			nic->rx_offload_flags &= ~OCCTX_RX_VLAN_FLTR_F;
 		}
 	}
@@ -139,7 +139,7 @@ octeontx_dev_vlan_offload_init(struct rte_eth_dev *dev)
 
 	TAILQ_INIT(&nic->vlan_info.fltr_tbl);
 
-	rc = octeontx_dev_vlan_offload_set(dev, ETH_VLAN_FILTER_MASK);
+	rc = octeontx_dev_vlan_offload_set(dev, RTE_ETH_VLAN_FILTER_MASK);
 	if (rc)
 		octeontx_log_err("Failed to set vlan offload rc=%d", rc);
 
@@ -219,13 +219,13 @@ octeontx_dev_flow_ctrl_get(struct rte_eth_dev *dev,
 		return rc;
 
 	if (conf.rx_pause && conf.tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (conf.rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (conf.tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	/* low_water & high_water values are in Bytes */
 	fc_conf->low_water = conf.low_water;
@@ -272,10 +272,10 @@ octeontx_dev_flow_ctrl_set(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-			(fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+			(fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	conf.high_water = fc_conf->high_water;
 	conf.low_water = fc_conf->low_water;
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
index 75d4cabf2e7c..ebe503438144 100644
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ b/drivers/net/octeontx2/otx2_ethdev.c
@@ -21,7 +21,7 @@ nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
 
 	if (otx2_dev_is_vf(dev) ||
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
-		capa &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+		capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
 	return capa;
 }
@@ -33,10 +33,10 @@ nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
 
 	/* TSO not supported for earlier chip revisions */
 	if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
-		capa &= ~(DEV_TX_OFFLOAD_TCP_TSO |
-			  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			  DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-			  DEV_TX_OFFLOAD_GRE_TNL_TSO);
+		capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+			  RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
 	return capa;
 }
 
@@ -66,8 +66,8 @@ nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
 	req->npa_func = otx2_npa_pf_func_get();
 	req->sso_func = otx2_sso_pf_func_get();
 	req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM)) {
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
 		req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
 		req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
 	}
@@ -373,7 +373,7 @@ nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
 
 	aq->rq.sso_ena = 0;
 
-	if (rxq->offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		aq->rq.ipsech_ena = 1;
 
 	aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
@@ -664,7 +664,7 @@ otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
 	 * These are needed in deriving raw clock value from tsc counter.
 	 * read_clock eth op returns raw clock value.
 	 */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev)) {
 		rc = otx2_nix_raw_clock_tsc_conv(dev);
 		if (rc) {
@@ -691,7 +691,7 @@ nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
 	 * Maximum three segments can be supported with W8, Choose
 	 * NIX_MAXSQESZ_W16 for multi segment offload.
 	 */
-	if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		return NIX_MAXSQESZ_W16;
 	else
 		return NIX_MAXSQESZ_W8;
@@ -706,29 +706,29 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
 	struct rte_eth_rxmode *rxmode = &conf->rxmode;
 	uint16_t flags = 0;
 
-	if (rxmode->mq_mode == ETH_MQ_RX_RSS &&
-			(dev->rx_offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
+			(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		flags |= NIX_RX_OFFLOAD_RSS_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_TCP_CKSUM |
-			 DEV_RX_OFFLOAD_UDP_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			 RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_IPV4_CKSUM |
-				DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+				RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
 		flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		flags |= NIX_RX_MULTI_SEG_F;
 
-	if (dev->rx_offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-				DEV_RX_OFFLOAD_QINQ_STRIP))
+	if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+				RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
 		flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
 		flags |= NIX_RX_OFFLOAD_SECURITY_F;
 
 	if (!dev->ptype_disable)
@@ -767,43 +767,43 @@ nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
 			 offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
 
-	if (conf & DEV_TX_OFFLOAD_VLAN_INSERT ||
-	    conf & DEV_TX_OFFLOAD_QINQ_INSERT)
+	if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
+	    conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
 		flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
 
-	if (conf & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
 
-	if (conf & DEV_TX_OFFLOAD_IPV4_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_TCP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_UDP_CKSUM ||
-	    conf & DEV_TX_OFFLOAD_SCTP_CKSUM)
+	if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
+	    conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
 		flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
 
-	if (!(conf & DEV_TX_OFFLOAD_MBUF_FAST_FREE))
+	if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
 		flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
 
-	if (conf & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		flags |= NIX_TX_MULTI_SEG_F;
 
 	/* Enable Inner checksum for TSO */
-	if (conf & DEV_TX_OFFLOAD_TCP_TSO)
+	if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
 	/* Enable Inner and Outer checksum for Tunnel TSO */
-	if (conf & (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-		    DEV_TX_OFFLOAD_GENEVE_TNL_TSO |
-		    DEV_TX_OFFLOAD_GRE_TNL_TSO))
+	if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
+		    RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
 		flags |= (NIX_TX_OFFLOAD_TSO_F |
 			  NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
 			  NIX_TX_OFFLOAD_L3_L4_CSUM_F);
 
-	if (conf & DEV_TX_OFFLOAD_SECURITY)
+	if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
 		flags |= NIX_TX_OFFLOAD_SECURITY_F;
 
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP))
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
 		flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
 	return flags;
@@ -913,8 +913,8 @@ otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
 	buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
 
 	if (eth_dev->data->dev_conf.rxmode.max_rx_pkt_len > buffsz) {
-		dev->rx_offloads |= DEV_RX_OFFLOAD_SCATTER;
-		dev->tx_offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 		/* Setting up the rx[tx]_offload_flags due to change
 		 * in rx[tx]_offloads.
@@ -1857,21 +1857,21 @@ otx2_nix_configure(struct rte_eth_dev *eth_dev)
 		goto fail_configure;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-	    rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+	    rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
 		goto fail_configure;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
 		goto fail_configure;
 	}
 
 	if (otx2_dev_is_Ax(dev) &&
-	    (txmode->offloads & DEV_TX_OFFLOAD_SCTP_CKSUM) &&
-	    ((txmode->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
-	    (txmode->offloads & DEV_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
+	    ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
 		otx2_err("Outer IP and SCTP checksum unsupported");
 		goto fail_configure;
 	}
@@ -2244,7 +2244,7 @@ otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
 	 * enabled in PF owning this VF
 	 */
 	memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP) ||
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
 	    otx2_ethdev_is_ptp_en(dev))
 		otx2_nix_timesync_enable(eth_dev);
 	else
@@ -2573,8 +2573,8 @@ otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
 	rc = otx2_eth_sec_ctx_create(eth_dev);
 	if (rc)
 		goto free_mac_addrs;
-	dev->tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
-	dev->rx_offload_capa |= DEV_RX_OFFLOAD_SECURITY;
+	dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
+	dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
 
 	/* Initialize rte-flow */
 	rc = otx2_flow_init(dev);
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
index 7871e3d30bda..04e43b63c192 100644
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ b/drivers/net/octeontx2/otx2_ethdev.h
@@ -117,44 +117,44 @@
 #define CQ_TIMER_THRESH_DEFAULT	0xAULL /* ~1usec i.e (0xA * 100nsec) */
 #define CQ_TIMER_THRESH_MAX     255
 
-#define NIX_RSS_L3_L4_SRC_DST  (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY \
-				| ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)
+#define NIX_RSS_L3_L4_SRC_DST  (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
+				| RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
 
-#define NIX_RSS_OFFLOAD		(ETH_RSS_PORT | ETH_RSS_IP | ETH_RSS_UDP |\
-				 ETH_RSS_TCP | ETH_RSS_SCTP | \
-				 ETH_RSS_TUNNEL | ETH_RSS_L2_PAYLOAD | \
-				 NIX_RSS_L3_L4_SRC_DST | ETH_RSS_LEVEL_MASK | \
-				 ETH_RSS_C_VLAN)
+#define NIX_RSS_OFFLOAD		(RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
+				 RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
+				 RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
+				 NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
+				 RTE_ETH_RSS_C_VLAN)
 
 #define NIX_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE	| \
-	DEV_TX_OFFLOAD_MT_LOCKFREE	| \
-	DEV_TX_OFFLOAD_VLAN_INSERT	| \
-	DEV_TX_OFFLOAD_QINQ_INSERT	| \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
-	DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_CKSUM	| \
-	DEV_TX_OFFLOAD_UDP_CKSUM	| \
-	DEV_TX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_TX_OFFLOAD_TCP_TSO		| \
-	DEV_TX_OFFLOAD_VXLAN_TNL_TSO    | \
-	DEV_TX_OFFLOAD_GENEVE_TNL_TSO   | \
-	DEV_TX_OFFLOAD_GRE_TNL_TSO	| \
-	DEV_TX_OFFLOAD_MULTI_SEGS	| \
-	DEV_TX_OFFLOAD_IPV4_CKSUM)
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	| \
+	RTE_ETH_TX_OFFLOAD_MT_LOCKFREE	| \
+	RTE_ETH_TX_OFFLOAD_VLAN_INSERT	| \
+	RTE_ETH_TX_OFFLOAD_QINQ_INSERT	| \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM	| \
+	RTE_ETH_TX_OFFLOAD_TCP_TSO		| \
+	RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    | \
+	RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   | \
+	RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO	| \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS	| \
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 
 #define NIX_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM		| \
-	DEV_RX_OFFLOAD_SCTP_CKSUM	| \
-	DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_RX_OFFLOAD_SCATTER		| \
-	DEV_RX_OFFLOAD_JUMBO_FRAME	| \
-	DEV_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
-	DEV_RX_OFFLOAD_VLAN_STRIP	| \
-	DEV_RX_OFFLOAD_VLAN_FILTER	| \
-	DEV_RX_OFFLOAD_QINQ_STRIP	| \
-	DEV_RX_OFFLOAD_TIMESTAMP	| \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM		| \
+	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM	| \
+	RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_RX_OFFLOAD_SCATTER		| \
+	RTE_ETH_RX_OFFLOAD_JUMBO_FRAME	| \
+	RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM	| \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP	| \
+	RTE_ETH_RX_OFFLOAD_VLAN_FILTER	| \
+	RTE_ETH_RX_OFFLOAD_QINQ_STRIP	| \
+	RTE_ETH_RX_OFFLOAD_TIMESTAMP	| \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NIX_DEFAULT_RSS_CTX_GROUP  0
 #define NIX_DEFAULT_RSS_MCAM_IDX  -1
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
index 83f905315b38..60bf6c3f5f05 100644
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ b/drivers/net/octeontx2/otx2_ethdev_devargs.c
@@ -49,12 +49,12 @@ parse_reta_size(const char *key, const char *value, void *extra_args)
 
 	val = atoi(value);
 
-	if (val <= ETH_RSS_RETA_SIZE_64)
-		val = ETH_RSS_RETA_SIZE_64;
-	else if (val > ETH_RSS_RETA_SIZE_64 && val <= ETH_RSS_RETA_SIZE_128)
-		val = ETH_RSS_RETA_SIZE_128;
-	else if (val > ETH_RSS_RETA_SIZE_128 && val <= ETH_RSS_RETA_SIZE_256)
-		val = ETH_RSS_RETA_SIZE_256;
+	if (val <= RTE_ETH_RSS_RETA_SIZE_64)
+		val = RTE_ETH_RSS_RETA_SIZE_64;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
+		val = RTE_ETH_RSS_RETA_SIZE_128;
+	else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
+		val = RTE_ETH_RSS_RETA_SIZE_256;
 	else
 		val = NIX_RSS_RETA_SIZE;
 
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
index 5a4501208e9e..41761085e156 100644
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ b/drivers/net/octeontx2/otx2_ethdev_ops.c
@@ -29,11 +29,11 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 	 * when this feature has not been enabled before.
 	 */
 	if (data->dev_started && frame_size > buffsz &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER))
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
 		return -EINVAL;
 
 	/* Check <seg size> * <max_seg>  >= max_frame */
-	if ((dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)	&&
+	if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	&&
 	    (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
 		return -EINVAL;
 
@@ -59,9 +59,9 @@ otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
 		return rc;
 
 	if (frame_size > NIX_L2_MAX_LEN)
-		dev->rx_offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		dev->rx_offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* Update max_rx_pkt_len */
 	data->dev_conf.rxmode.max_rx_pkt_len = frame_size;
@@ -590,17 +590,17 @@ otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
 	};
 
 	/* Auto negotiation disabled */
-	devinfo->speed_capa = ETH_LINK_SPEED_FIXED;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
-		devinfo->speed_capa |= ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G |
-			ETH_LINK_SPEED_25G | ETH_LINK_SPEED_40G;
+		devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
 
 		/* 50G and 100G to be supported for board version C0
 		 * and above.
 		 */
 		if (!otx2_dev_is_Ax(dev))
-			devinfo->speed_capa |= ETH_LINK_SPEED_50G |
-					       ETH_LINK_SPEED_100G;
+			devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
+					       RTE_ETH_LINK_SPEED_100G;
 	}
 
 	devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
index c2a36883cbf2..e1654ef5b284 100644
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ b/drivers/net/octeontx2/otx2_ethdev_sec.c
@@ -890,8 +890,8 @@ otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
 	RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
 			 !RTE_IS_POWER_OF_2(sa_width));
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return 0;
 
 	if (rte_security_dynfield_register() < 0)
@@ -933,8 +933,8 @@ otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
 	uint16_t port = eth_dev->data->port_id;
 	char name[RTE_MEMZONE_NAMESIZE];
 
-	if (!(dev->tx_offloads & DEV_TX_OFFLOAD_SECURITY) &&
-	    !(dev->rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
+	    !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	lookup_mem_sa_tbl_clear(eth_dev);
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
index 6df0732189eb..1d0fe4e950d4 100644
--- a/drivers/net/octeontx2/otx2_flow.c
+++ b/drivers/net/octeontx2/otx2_flow.c
@@ -625,7 +625,7 @@ otx2_flow_create(struct rte_eth_dev *dev,
 		goto err_exit;
 	}
 
-	if (hw->rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		rc = flow_update_sec_tt(dev, actions);
 		if (rc != 0) {
 			rte_flow_error_set(error, EIO,
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
index 76bf48100183..071740de86a7 100644
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ b/drivers/net/octeontx2/otx2_flow_ctrl.c
@@ -54,7 +54,7 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 	int rc;
 
 	if (otx2_dev_is_lbk(dev)) {
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		return 0;
 	}
 
@@ -66,13 +66,13 @@ otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 		goto done;
 
 	if (rsp->rx_pause && rsp->tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rsp->rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (rsp->tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 done:
 	return rc;
@@ -159,10 +159,10 @@ otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	if (fc_conf->mode == fc->mode)
 		return 0;
 
-	rx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_RX_PAUSE);
-	tx_pause = (fc_conf->mode == RTE_FC_FULL) ||
-		    (fc_conf->mode == RTE_FC_TX_PAUSE);
+	rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
+	tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
+		    (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
 
 	/* Check if TX pause frame is already enabled or not */
 	if (fc->tx_pause ^ tx_pause) {
@@ -212,11 +212,11 @@ otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
 	/* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
 	if (otx2_dev_is_Ax(dev) &&
 	    (dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
-	    (fc_conf.mode == RTE_FC_FULL || fc_conf.mode == RTE_FC_RX_PAUSE)) {
+	    (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
 		fc_conf.mode =
-				(fc_conf.mode == RTE_FC_FULL ||
-				fc_conf.mode == RTE_FC_TX_PAUSE) ?
-				RTE_FC_TX_PAUSE : RTE_FC_NONE;
+				(fc_conf.mode == RTE_ETH_FC_FULL ||
+				fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
+				RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
 	}
 
 	return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
@@ -234,7 +234,7 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		return 0;
 
 	memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	/* Both Rx & Tx flow ctrl get enabled(RTE_FC_FULL) in HW
+	/* Both Rx & Tx flow ctrl get enabled(RTE_ETH_FC_FULL) in HW
 	 * by AF driver, update those info in PMD structure.
 	 */
 	rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
@@ -242,10 +242,10 @@ otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
 		goto exit;
 
 	fc->mode = fc_conf.mode;
-	fc->rx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_RX_PAUSE);
-	fc->tx_pause = (fc_conf.mode == RTE_FC_FULL) ||
-			(fc_conf.mode == RTE_FC_TX_PAUSE);
+	fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
+	fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
+			(fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
 
 exit:
 	return rc;
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
index 63a33142a579..3fe6727f1d2a 100644
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ b/drivers/net/octeontx2/otx2_flow_parse.c
@@ -852,7 +852,7 @@ parse_rss_action(struct rte_eth_dev *dev,
 					  attr, "No support of RSS in egress");
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS)
+	if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
 		return rte_flow_error_set(error, ENOTSUP,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
 					  act, "multi-queue mode is disabled");
@@ -1188,7 +1188,7 @@ otx2_flow_parse_actions(struct rte_eth_dev *dev,
 		 *FLOW_KEY_ALG index. So, till we update the action with
 		 *flow_key_alg index, set the action to drop.
 		 */
-		if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+		if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 			flow->npc_action = NIX_RX_ACTIONOP_DROP;
 		else
 			flow->npc_action = NIX_RX_ACTIONOP_UCAST;
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
index 81dd6243b977..8f5d0eed92b6 100644
--- a/drivers/net/octeontx2/otx2_link.c
+++ b/drivers/net/octeontx2/otx2_link.c
@@ -41,7 +41,7 @@ nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
 		otx2_info("Port %d: Link Up - speed %u Mbps - %s",
 			  (int)(eth_dev->data->port_id),
 			  (uint32_t)link->link_speed,
-			  link->link_duplex == ETH_LINK_FULL_DUPLEX ?
+			  link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 			  "full-duplex" : "half-duplex");
 	else
 		otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
@@ -92,7 +92,7 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 
 	eth_link.link_status = link->link_up;
 	eth_link.link_speed = link->speed;
-	eth_link.link_autoneg = ETH_LINK_AUTONEG;
+	eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 	eth_link.link_duplex = link->full_duplex;
 
 	otx2_dev->speed = link->speed;
@@ -111,10 +111,10 @@ otx2_eth_dev_link_status_update(struct otx2_dev *dev,
 static int
 lbk_link_update(struct rte_eth_link *link)
 {
-	link->link_status = ETH_LINK_UP;
-	link->link_speed = ETH_SPEED_NUM_100G;
-	link->link_autoneg = ETH_LINK_FIXED;
-	link->link_duplex = ETH_LINK_FULL_DUPLEX;
+	link->link_status = RTE_ETH_LINK_UP;
+	link->link_speed = RTE_ETH_SPEED_NUM_100G;
+	link->link_autoneg = RTE_ETH_LINK_FIXED;
+	link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	return 0;
 }
 
@@ -131,7 +131,7 @@ cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
 
 	link->link_status = rsp->link_info.link_up;
 	link->link_speed = rsp->link_info.speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (rsp->link_info.full_duplex)
 		link->link_duplex = rsp->link_info.full_duplex;
@@ -233,22 +233,22 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 
 	/* 50G and 100G to be supported for board version C0 and above */
 	if (!otx2_dev_is_Ax(dev)) {
-		if (link_speeds & ETH_LINK_SPEED_100G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_100G)
 			link_speed = 100000;
-		if (link_speeds & ETH_LINK_SPEED_50G)
+		if (link_speeds & RTE_ETH_LINK_SPEED_50G)
 			link_speed = 50000;
 	}
-	if (link_speeds & ETH_LINK_SPEED_40G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_40G)
 		link_speed = 40000;
-	if (link_speeds & ETH_LINK_SPEED_25G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_25G)
 		link_speed = 25000;
-	if (link_speeds & ETH_LINK_SPEED_20G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_20G)
 		link_speed = 20000;
-	if (link_speeds & ETH_LINK_SPEED_10G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_10G)
 		link_speed = 10000;
-	if (link_speeds & ETH_LINK_SPEED_5G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_5G)
 		link_speed = 5000;
-	if (link_speeds & ETH_LINK_SPEED_1G)
+	if (link_speeds & RTE_ETH_LINK_SPEED_1G)
 		link_speed = 1000;
 
 	return link_speed;
@@ -257,11 +257,11 @@ nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
 static inline uint8_t
 nix_parse_eth_link_duplex(uint32_t link_speeds)
 {
-	if ((link_speeds & ETH_LINK_SPEED_10M_HD) ||
-			(link_speeds & ETH_LINK_SPEED_100M_HD))
-		return ETH_LINK_HALF_DUPLEX;
+	if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
+			(link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
+		return RTE_ETH_LINK_HALF_DUPLEX;
 	else
-		return ETH_LINK_FULL_DUPLEX;
+		return RTE_ETH_LINK_FULL_DUPLEX;
 }
 
 int
@@ -279,7 +279,7 @@ otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
 	cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
 	if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
 		cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
-		cfg.an = (conf->link_speeds & ETH_LINK_SPEED_FIXED) == 0;
+		cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
 
 		return cgx_change_mode(dev, &cfg);
 	}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
index f84aa1bf570c..b9c63ad3bc21 100644
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ b/drivers/net/octeontx2/otx2_mcast.c
@@ -100,7 +100,7 @@ nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
 
 		action = NIX_RX_ACTIONOP_UCAST;
 
-		if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+		if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 			action = NIX_RX_ACTIONOP_RSS;
 			action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 		}
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
index 91e5c0f6bd11..abb213058792 100644
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ b/drivers/net/octeontx2/otx2_ptp.c
@@ -250,7 +250,7 @@ otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
 	/* System time should be already on by default */
 	nix_start_timecounters(eth_dev);
 
-	dev->rx_offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
 
@@ -287,7 +287,7 @@ otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
 	if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
 		return -EINVAL;
 
-	dev->rx_offloads &= ~DEV_RX_OFFLOAD_TIMESTAMP;
+	dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 	dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
 	dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
 
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
index 7dbe5f69ae65..68cef1caa394 100644
--- a/drivers/net/octeontx2/otx2_rss.c
+++ b/drivers/net/octeontx2/otx2_rss.c
@@ -85,8 +85,8 @@ otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++) {
+	for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
 			if ((reta_conf[i].mask >> j) & 0x01)
 				rss->ind_tbl[idx] = reta_conf[i].reta[j];
 			idx++;
@@ -118,8 +118,8 @@ otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Copy RETA table */
-	for (i = 0; i < (dev->rss_info.rss_size / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = rss->ind_tbl[j];
 	}
@@ -178,23 +178,23 @@ rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
 }
 
 #define RSS_IPV4_ENABLE ( \
-			  ETH_RSS_IPV4 | \
-			  ETH_RSS_FRAG_IPV4 | \
-			  ETH_RSS_NONFRAG_IPV4_UDP | \
-			  ETH_RSS_NONFRAG_IPV4_TCP | \
-			  ETH_RSS_NONFRAG_IPV4_SCTP)
+			  RTE_ETH_RSS_IPV4 | \
+			  RTE_ETH_RSS_FRAG_IPV4 | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
 
 #define RSS_IPV6_ENABLE ( \
-			  ETH_RSS_IPV6 | \
-			  ETH_RSS_FRAG_IPV6 | \
-			  ETH_RSS_NONFRAG_IPV6_UDP | \
-			  ETH_RSS_NONFRAG_IPV6_TCP | \
-			  ETH_RSS_NONFRAG_IPV6_SCTP)
+			  RTE_ETH_RSS_IPV6 | \
+			  RTE_ETH_RSS_FRAG_IPV6 | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+			  RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
 
 #define RSS_IPV6_EX_ENABLE ( \
-			     ETH_RSS_IPV6_EX | \
-			     ETH_RSS_IPV6_TCP_EX | \
-			     ETH_RSS_IPV6_UDP_EX)
+			     RTE_ETH_RSS_IPV6_EX | \
+			     RTE_ETH_RSS_IPV6_TCP_EX | \
+			     RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define RSS_MAX_LEVELS   3
 
@@ -233,24 +233,24 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 
 	dev->rss_info.nix_rss = ethdev_rss;
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD &&
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
 	    dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
 		flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
 	}
 
-	if (ethdev_rss & ETH_RSS_C_VLAN)
+	if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
 
-	if (ethdev_rss & ETH_RSS_L3_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
 
-	if (ethdev_rss & ETH_RSS_L3_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
 
-	if (ethdev_rss & ETH_RSS_L4_SRC_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
 
-	if (ethdev_rss & ETH_RSS_L4_DST_ONLY)
+	if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
 		flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
 
 	if (ethdev_rss & RSS_IPV4_ENABLE)
@@ -259,34 +259,34 @@ otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
 	if (ethdev_rss & RSS_IPV6_ENABLE)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
 
-	if (ethdev_rss & ETH_RSS_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_TCP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_UDP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_SCTP)
+	if (ethdev_rss & RTE_ETH_RSS_SCTP)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
 
-	if (ethdev_rss & ETH_RSS_L2_PAYLOAD)
+	if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
 		flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
 
 	if (ethdev_rss & RSS_IPV6_EX_ENABLE)
 		flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		flowkey_cfg |= FLOW_KEY_TYPE_PORT;
 
-	if (ethdev_rss & ETH_RSS_NVGRE)
+	if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 		flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
 
-	if (ethdev_rss & ETH_RSS_VXLAN)
+	if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 		flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
 
-	if (ethdev_rss & ETH_RSS_GENEVE)
+	if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 		flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
 
-	if (ethdev_rss & ETH_RSS_GTPU)
+	if (ethdev_rss & RTE_ETH_RSS_GTPU)
 		flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
 
 	return flowkey_cfg;
@@ -343,7 +343,7 @@ otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
 		otx2_nix_rss_set_key(dev, rss_conf->rss_key,
 				     (uint32_t)rss_conf->rss_key_len);
 
-	rss_hash_level = ETH_RSS_LEVEL(rss_conf->rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg =
@@ -390,7 +390,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	int rc;
 
 	/* Skip further configuration if selected mode is not RSS */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS || !qcnt)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
 		return 0;
 
 	/* Update default RSS key and cfg */
@@ -408,7 +408,7 @@ otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
 	}
 
 	rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
-	rss_hash_level = ETH_RSS_LEVEL(rss_hf);
+	rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
 	if (rss_hash_level)
 		rss_hash_level -= 1;
 	flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
index ffeade5952dc..986902287b67 100644
--- a/drivers/net/octeontx2/otx2_rx.c
+++ b/drivers/net/octeontx2/otx2_rx.c
@@ -414,12 +414,12 @@ NIX_RX_FASTPATH_MODES
 	/* For PTP enabled, scalar rx function should be chosen as most of the
 	 * PTP apps are implemented to rx burst 1 pkt.
 	 */
-	if (dev->scalar_ena || dev->rx_offloads & DEV_RX_OFFLOAD_TIMESTAMP)
+	if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
 		pick_rx_func(eth_dev, nix_eth_rx_burst);
 	else
 		pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
 
-	if (dev->rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
 
 	/* Copy multi seg version with no offload for tear down sequence */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
index ff299f00b913..c60190074926 100644
--- a/drivers/net/octeontx2/otx2_tx.c
+++ b/drivers/net/octeontx2/otx2_tx.c
@@ -1070,7 +1070,7 @@ NIX_TX_FASTPATH_MODES
 	else
 		pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
 
-	if (dev->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS)
+	if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 		pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
 
 	rte_mb();
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
index f5161e17a16d..cce643b7b51d 100644
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ b/drivers/net/octeontx2/otx2_vlan.c
@@ -50,7 +50,7 @@ nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
 
 	action = NIX_RX_ACTIONOP_UCAST;
 
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		action = NIX_RX_ACTIONOP_RSS;
 		action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
 	}
@@ -99,7 +99,7 @@ nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
 	 * Take offset from LA since in case of untagged packet,
 	 * lbptr is zero.
 	 */
-	if (type == ETH_VLAN_TYPE_OUTER) {
+	if (type == RTE_ETH_VLAN_TYPE_OUTER) {
 		vtag_action.act.vtag0_def = vtag_index;
 		vtag_action.act.vtag0_lid = NPC_LID_LA;
 		vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
@@ -413,7 +413,7 @@ nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
 		if (vlan->strip_on ||
 		    (vlan->qinq_on && !vlan->qinq_before_def)) {
 			if (eth_dev->data->dev_conf.rxmode.mq_mode ==
-								ETH_MQ_RX_RSS)
+								RTE_ETH_MQ_RX_RSS)
 				vlan->def_rx_mcam_ent.action |=
 							NIX_RX_ACTIONOP_RSS;
 			else
@@ -717,48 +717,48 @@ otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 
 	rxmode = &eth_dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, true);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			rc = nix_vlan_hw_strip(eth_dev, false);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
-			offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
+			offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, true, 0);
 		} else {
-			offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			rc = nix_vlan_hw_filter(eth_dev, false, 0);
 		}
 		if (rc)
 			goto done;
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
 		if (!dev->vlan_info.qinq_on) {
-			offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, true);
 			if (rc)
 				goto done;
 		}
 	} else {
 		if (dev->vlan_info.qinq_on) {
-			offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
+			offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 			rc = otx2_nix_config_double_vlan(eth_dev, false);
 			if (rc)
 				goto done;
 		}
 	}
 
-	if (offloads & (DEV_RX_OFFLOAD_VLAN_STRIP |
-			DEV_RX_OFFLOAD_QINQ_STRIP)) {
+	if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
+			RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
 		dev->rx_offloads |= offloads;
 		dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
 		otx2_eth_set_rx_function(eth_dev);
@@ -780,7 +780,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
 
 	tpid_cfg->tpid = tpid;
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
 	else
 		tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
@@ -789,7 +789,7 @@ otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
 	if (rc)
 		return rc;
 
-	if (type == ETH_VLAN_TYPE_OUTER)
+	if (type == RTE_ETH_VLAN_TYPE_OUTER)
 		dev->vlan_info.outer_vlan_tpid = tpid;
 	else
 		dev->vlan_info.inner_vlan_tpid = tpid;
@@ -864,7 +864,7 @@ otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev,       uint16_t vlan_id, int on)
 		vlan->outer_vlan_idx = 0;
 	}
 
-	rc = nix_vlan_handle_default_tx_entry(dev, ETH_VLAN_TYPE_OUTER,
+	rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
 					      vtag_index, on);
 	if (rc < 0) {
 		printf("Default tx entry failed with rc %d\n", rc);
@@ -986,12 +986,12 @@ otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
 	} else {
 		/* Reinstall all mcam entries now if filter offload is set */
 		if (eth_dev->data->dev_conf.rxmode.offloads &
-		    DEV_RX_OFFLOAD_VLAN_FILTER)
+		    RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			nix_vlan_reinstall_vlan_filters(eth_dev);
 	}
 
 	mask =
-	    ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK;
+	    RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
 	rc = otx2_nix_vlan_offload_set(eth_dev, mask);
 	if (rc) {
 		otx2_err("Failed to set vlan offload rc=%d", rc);
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index a243683d61d3..7bfa6098e230 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -33,15 +33,15 @@ otx_ep_dev_info_get(struct rte_eth_dev *eth_dev,
 
 	otx_epvf = OTX_EP_DEV(eth_dev);
 
-	devinfo->speed_capa = ETH_LINK_SPEED_10G;
+	devinfo->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	devinfo->max_rx_queues = otx_epvf->max_rx_queues;
 	devinfo->max_tx_queues = otx_epvf->max_tx_queues;
 
 	devinfo->min_rx_bufsize = OTX_EP_MIN_RX_BUF_SIZE;
 	devinfo->max_rx_pktlen = OTX_EP_MAX_PKT_SZ;
-	devinfo->rx_offload_capa = DEV_RX_OFFLOAD_JUMBO_FRAME;
-	devinfo->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
-	devinfo->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	devinfo->rx_offload_capa = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+	devinfo->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
+	devinfo->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	devinfo->max_mac_addrs = OTX_EP_MAX_MAC_ADDRS;
 
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index a7d433547e36..77593111f141 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -563,7 +563,7 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)rte_malloc(NULL,
@@ -697,7 +697,7 @@ otx2_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
 			struct otx_ep_buf_free_info *finfo;
 			int j, frags, num_sg;
 
-			if (!(otx_ep->tx_offloads & DEV_TX_OFFLOAD_MULTI_SEGS))
+			if (!(otx_ep->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS))
 				goto xmit_fail;
 
 			finfo = (struct otx_ep_buf_free_info *)
@@ -954,13 +954,13 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep,
 	droq_pkt->l4_len = hdr_lens.l4_len;
 
 	if ((droq_pkt->pkt_len > (RTE_ETHER_MAX_LEN + OTX_CUST_DATA_LEN)) &&
-	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)) {
+	    !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)) {
 		rte_pktmbuf_free(droq_pkt);
 		goto oq_read_fail;
 	}
 
 	if (droq_pkt->nb_segs > 1 &&
-	    !(otx_ep->rx_offloads & DEV_RX_OFFLOAD_SCATTER)) {
+	    !(otx_ep->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)) {
 		rte_pktmbuf_free(droq_pkt);
 		goto oq_read_fail;
 	}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index a8774b7a432a..13d18e875444 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -135,10 +135,10 @@ static const char *valid_arguments[] = {
 };
 
 static struct rte_eth_link pmd_link = {
-		.link_speed = ETH_SPEED_NUM_10G,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN,
-		.link_autoneg = ETH_LINK_FIXED,
+		.link_speed = RTE_ETH_SPEED_NUM_10G,
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN,
+		.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_pcap_logtype, NOTICE);
@@ -655,7 +655,7 @@ eth_dev_start(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -710,7 +710,7 @@ eth_dev_stop(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	return 0;
 }
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index feec4d10a26e..a74f27bf8158 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -22,15 +22,15 @@ struct pfe_vdev_init_params {
 static struct pfe *g_pfe;
 /* Supported Rx offloads */
 static uint64_t dev_rx_offloads_sup =
-		DEV_RX_OFFLOAD_IPV4_CKSUM |
-		DEV_RX_OFFLOAD_UDP_CKSUM |
-		DEV_RX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 /* Supported Tx offloads */
 static uint64_t dev_tx_offloads_sup =
-		DEV_TX_OFFLOAD_IPV4_CKSUM |
-		DEV_TX_OFFLOAD_UDP_CKSUM |
-		DEV_TX_OFFLOAD_TCP_CKSUM;
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 /* TODO: make pfe_svr a runtime option.
  * Driver should be able to get the SVR
@@ -613,9 +613,9 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 	}
 
 	link.link_status = lstatus;
-	link.link_speed = ETH_LINK_SPEED_1G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_speed = RTE_ETH_LINK_SPEED_1G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	pfe_eth_atomic_write_link_status(dev, &link);
 
diff --git a/drivers/net/qede/base/mcp_public.h b/drivers/net/qede/base/mcp_public.h
index 6667c2d7ab6d..511742c6a1b3 100644
--- a/drivers/net/qede/base/mcp_public.h
+++ b/drivers/net/qede/base/mcp_public.h
@@ -65,8 +65,8 @@ typedef u32 offsize_t;      /* In DWORDS !!! */
 struct eth_phy_cfg {
 /* 0 = autoneg, 1000/10000/20000/25000/40000/50000/100000 */
 	u32 speed;
-#define ETH_SPEED_AUTONEG   0
-#define ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
+#define RTE_ETH_SPEED_AUTONEG   0
+#define RTE_ETH_SPEED_SMARTLINQ  0x8 /* deprecated - use link_modes field instead */
 
 	u32 pause;      /* bitmask */
 #define ETH_PAUSE_NONE		0x0
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 323d46e6ebb2..81c35358dc57 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -342,9 +342,9 @@ qede_assign_rxtx_handlers(struct rte_eth_dev *dev, bool is_dummy)
 	}
 
 	use_tx_offload = !!(tx_offloads &
-			    (DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
-			     DEV_TX_OFFLOAD_TCP_TSO | /* tso */
-			     DEV_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
+			    (RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | /* tunnel */
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO | /* tso */
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT)); /* vlan insert */
 
 	if (use_tx_offload) {
 		DP_INFO(edev, "Assigning qede_xmit_pkts\n");
@@ -1002,16 +1002,16 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
 	uint64_t rx_offloads = eth_dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			(void)qede_vlan_stripping(eth_dev, 1);
 		else
 			(void)qede_vlan_stripping(eth_dev, 0);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
 		/* VLAN filtering kicks in when a VLAN is added */
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
 			qede_vlan_filter_set(eth_dev, 0, 1);
 		} else {
 			if (qdev->configured_vlans > 1) { /* Excluding VLAN0 */
@@ -1022,7 +1022,7 @@ static int qede_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
 				 * enabled
 				 */
 				eth_dev->data->dev_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_VLAN_FILTER;
+						RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 			} else {
 				qede_vlan_filter_set(eth_dev, 0, 0);
 			}
@@ -1069,11 +1069,11 @@ int qede_config_rss(struct rte_eth_dev *eth_dev)
 	/* Configure default RETA */
 	memset(reta_conf, 0, sizeof(reta_conf));
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < ECORE_RSS_IND_TABLE_SIZE; i++) {
-		id = i / RTE_RETA_GROUP_SIZE;
-		pos = i % RTE_RETA_GROUP_SIZE;
+		id = i / RTE_ETH_RETA_GROUP_SIZE;
+		pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		q = i % QEDE_RSS_COUNT(eth_dev);
 		reta_conf[id].reta[pos] = q;
 	}
@@ -1112,12 +1112,12 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	}
 
 	/* Configure TPA parameters */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		if (qede_enable_tpa(eth_dev, true))
 			return -EINVAL;
 		/* Enable scatter mode for LRO */
 		if (!eth_dev->data->scattered_rx)
-			rxmode->offloads |= DEV_RX_OFFLOAD_SCATTER;
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	}
 
 	/* Start queues */
@@ -1132,7 +1132,7 @@ static int qede_dev_start(struct rte_eth_dev *eth_dev)
 	 * Also, we would like to retain similar behavior in PF case, so we
 	 * don't do PF/VF specific check here.
 	 */
-	if (eth_dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		if (qede_config_rss(eth_dev))
 			goto err;
 
@@ -1272,8 +1272,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 
 	PMD_INIT_FUNC_TRACE(edev);
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* We need to have min 1 RX queue.There is no min check in
 	 * rte_eth_dev_configure(), so we are checking it here.
@@ -1291,8 +1291,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 		DP_NOTICE(edev, false,
 			  "Invalid devargs supplied, requested change will not take effect\n");
 
-	if (!(rxmode->mq_mode == ETH_MQ_RX_NONE ||
-	      rxmode->mq_mode == ETH_MQ_RX_RSS)) {
+	if (!(rxmode->mq_mode == RTE_ETH_MQ_RX_NONE ||
+	      rxmode->mq_mode == RTE_ETH_MQ_RX_RSS)) {
 		DP_ERR(edev, "Unsupported multi-queue mode\n");
 		return -ENOTSUP;
 	}
@@ -1313,12 +1313,12 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	}
 
 	/* If jumbo enabled adjust MTU */
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		eth_dev->data->mtu =
 			eth_dev->data->dev_conf.rxmode.max_rx_pkt_len -
 			RTE_ETHER_HDR_LEN - QEDE_ETH_OVERHEAD;
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		eth_dev->data->scattered_rx = 1;
 
 	if (qede_start_vport(qdev, eth_dev->data->mtu))
@@ -1327,8 +1327,8 @@ static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 	qdev->mtu = eth_dev->data->mtu;
 
 	/* Enable VLAN offloads by default */
-	ret = qede_vlan_offload_set(eth_dev, ETH_VLAN_STRIP_MASK  |
-					     ETH_VLAN_FILTER_MASK);
+	ret = qede_vlan_offload_set(eth_dev, RTE_ETH_VLAN_STRIP_MASK  |
+					     RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -1391,35 +1391,35 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	dev_info->reta_size = ECORE_RSS_IND_TABLE_SIZE;
 	dev_info->hash_key_size = ECORE_RSS_KEY_SIZE * sizeof(uint32_t);
 	dev_info->flow_type_rss_offloads = (uint64_t)QEDE_RSS_OFFLOAD_ALL;
-	dev_info->rx_offload_capa = (DEV_RX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_RX_OFFLOAD_UDP_CKSUM	|
-				     DEV_RX_OFFLOAD_TCP_CKSUM	|
-				     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_RX_OFFLOAD_TCP_LRO	|
-				     DEV_RX_OFFLOAD_KEEP_CRC    |
-				     DEV_RX_OFFLOAD_SCATTER	|
-				     DEV_RX_OFFLOAD_JUMBO_FRAME |
-				     DEV_RX_OFFLOAD_VLAN_FILTER |
-				     DEV_RX_OFFLOAD_VLAN_STRIP  |
-				     DEV_RX_OFFLOAD_RSS_HASH);
+	dev_info->rx_offload_capa = (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_RX_OFFLOAD_TCP_LRO	|
+				     RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+				     RTE_ETH_RX_OFFLOAD_SCATTER	|
+				     RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+				     RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+				     RTE_ETH_RX_OFFLOAD_VLAN_STRIP  |
+				     RTE_ETH_RX_OFFLOAD_RSS_HASH);
 	dev_info->rx_queue_offload_capa = 0;
 
 	/* TX offloads are on a per-packet basis, so it is applicable
 	 * to both at port and queue levels.
 	 */
-	dev_info->tx_offload_capa = (DEV_TX_OFFLOAD_VLAN_INSERT	|
-				     DEV_TX_OFFLOAD_IPV4_CKSUM	|
-				     DEV_TX_OFFLOAD_UDP_CKSUM	|
-				     DEV_TX_OFFLOAD_TCP_CKSUM	|
-				     DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				     DEV_TX_OFFLOAD_MULTI_SEGS  |
-				     DEV_TX_OFFLOAD_TCP_TSO	|
-				     DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				     DEV_TX_OFFLOAD_GENEVE_TNL_TSO);
+	dev_info->tx_offload_capa = (RTE_ETH_TX_OFFLOAD_VLAN_INSERT	|
+				     RTE_ETH_TX_OFFLOAD_IPV4_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_UDP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_TCP_CKSUM	|
+				     RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				     RTE_ETH_TX_OFFLOAD_MULTI_SEGS  |
+				     RTE_ETH_TX_OFFLOAD_TCP_TSO	|
+				     RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				     RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO);
 	dev_info->tx_queue_offload_capa = dev_info->tx_offload_capa;
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	};
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -1431,17 +1431,17 @@ qede_dev_info_get(struct rte_eth_dev *eth_dev,
 	memset(&link, 0, sizeof(struct qed_link_output));
 	qdev->ops->common->get_link(edev, &link);
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_1G)
-		speed_cap |= ETH_LINK_SPEED_1G;
+		speed_cap |= RTE_ETH_LINK_SPEED_1G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_10G)
-		speed_cap |= ETH_LINK_SPEED_10G;
+		speed_cap |= RTE_ETH_LINK_SPEED_10G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_25G)
-		speed_cap |= ETH_LINK_SPEED_25G;
+		speed_cap |= RTE_ETH_LINK_SPEED_25G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_40G)
-		speed_cap |= ETH_LINK_SPEED_40G;
+		speed_cap |= RTE_ETH_LINK_SPEED_40G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_50G)
-		speed_cap |= ETH_LINK_SPEED_50G;
+		speed_cap |= RTE_ETH_LINK_SPEED_50G;
 	if (link.adv_speed & NVM_CFG1_PORT_DRV_SPEED_CAPABILITY_MASK_BB_100G)
-		speed_cap |= ETH_LINK_SPEED_100G;
+		speed_cap |= RTE_ETH_LINK_SPEED_100G;
 	dev_info->speed_capa = speed_cap;
 
 	return 0;
@@ -1468,10 +1468,10 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	/* Link Mode */
 	switch (q_link.duplex) {
 	case QEDE_DUPLEX_HALF:
-		link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case QEDE_DUPLEX_FULL:
-		link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case QEDE_DUPLEX_UNKNOWN:
 	default:
@@ -1480,11 +1480,11 @@ qede_link_update(struct rte_eth_dev *eth_dev, __rte_unused int wait_to_complete)
 	link.link_duplex = link_duplex;
 
 	/* Link Status */
-	link.link_status = q_link.link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link.link_status = q_link.link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	/* AN */
 	link.link_autoneg = (q_link.supported_caps & QEDE_SUPPORTED_AUTONEG) ?
-			     ETH_LINK_AUTONEG : ETH_LINK_FIXED;
+			     RTE_ETH_LINK_AUTONEG : RTE_ETH_LINK_FIXED;
 
 	DP_INFO(edev, "Link - Speed %u Mode %u AN %u Status %u\n",
 		link.link_speed, link.link_duplex,
@@ -2019,12 +2019,12 @@ static int qede_flow_ctrl_set(struct rte_eth_dev *eth_dev,
 	}
 
 	/* Pause is assumed to be supported (SUPPORTED_Pause) */
-	if (fc_conf->mode == RTE_FC_FULL)
+	if (fc_conf->mode == RTE_ETH_FC_FULL)
 		params.pause_config |= (QED_LINK_PAUSE_TX_ENABLE |
 					QED_LINK_PAUSE_RX_ENABLE);
-	if (fc_conf->mode == RTE_FC_TX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_TX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_TX_ENABLE;
-	if (fc_conf->mode == RTE_FC_RX_PAUSE)
+	if (fc_conf->mode == RTE_ETH_FC_RX_PAUSE)
 		params.pause_config |= QED_LINK_PAUSE_RX_ENABLE;
 
 	params.link_up = true;
@@ -2048,13 +2048,13 @@ static int qede_flow_ctrl_get(struct rte_eth_dev *eth_dev,
 
 	if (current_link.pause_config & (QED_LINK_PAUSE_RX_ENABLE |
 					 QED_LINK_PAUSE_TX_ENABLE))
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (current_link.pause_config & QED_LINK_PAUSE_RX_ENABLE)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (current_link.pause_config & QED_LINK_PAUSE_TX_ENABLE)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -2095,14 +2095,14 @@ qede_dev_supported_ptypes_get(struct rte_eth_dev *eth_dev)
 static void qede_init_rss_caps(uint8_t *rss_caps, uint64_t hf)
 {
 	*rss_caps = 0;
-	*rss_caps |= (hf & ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
-	*rss_caps |= (hf & ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV4)              ? ECORE_RSS_IPV4 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6)              ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_EX)           ? ECORE_RSS_IPV6 : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)  ? ECORE_RSS_IPV4_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)  ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_IPV6_TCP_EX)       ? ECORE_RSS_IPV6_TCP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)  ? ECORE_RSS_IPV4_UDP : 0;
+	*rss_caps |= (hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)  ? ECORE_RSS_IPV6_UDP : 0;
 }
 
 int qede_rss_hash_update(struct rte_eth_dev *eth_dev,
@@ -2228,7 +2228,7 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 	uint8_t entry;
 	int rc = 0;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported by hardware\n",
 		       reta_size);
 		return -EINVAL;
@@ -2252,8 +2252,8 @@ int qede_rss_reta_update(struct rte_eth_dev *eth_dev,
 
 	for_each_hwfn(edev, i) {
 		for (j = 0; j < reta_size; j++) {
-			idx = j / RTE_RETA_GROUP_SIZE;
-			shift = j % RTE_RETA_GROUP_SIZE;
+			idx = j / RTE_ETH_RETA_GROUP_SIZE;
+			shift = j % RTE_ETH_RETA_GROUP_SIZE;
 			if (reta_conf[idx].mask & (1ULL << shift)) {
 				entry = reta_conf[idx].reta[shift];
 				fid = entry * edev->num_hwfns + i;
@@ -2289,15 +2289,15 @@ static int qede_rss_reta_query(struct rte_eth_dev *eth_dev,
 	uint16_t i, idx, shift;
 	uint8_t entry;
 
-	if (reta_size > ETH_RSS_RETA_SIZE_128) {
+	if (reta_size > RTE_ETH_RSS_RETA_SIZE_128) {
 		DP_ERR(edev, "reta_size %d is not supported\n",
 		       reta_size);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if (reta_conf[idx].mask & (1ULL << shift)) {
 			entry = qdev->rss_ind_table[i];
 			reta_conf[idx].reta[shift] = entry;
@@ -2369,9 +2369,9 @@ static int qede_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 		}
 	}
 	if (frame_size > QEDE_ETH_MAX_LEN)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		dev->data->dev_conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		dev->data->dev_conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	if (!dev->data->dev_started && restart) {
 		qede_dev_start(dev);
diff --git a/drivers/net/qede/qede_filter.c b/drivers/net/qede/qede_filter.c
index c756594bfc4b..ceb47c17d0d6 100644
--- a/drivers/net/qede/qede_filter.c
+++ b/drivers/net/qede/qede_filter.c
@@ -144,7 +144,7 @@ int qede_check_fdir_support(struct rte_eth_dev *eth_dev)
 {
 	struct qede_dev *qdev = QEDE_INIT_QDEV(eth_dev);
 	struct ecore_dev *edev = QEDE_INIT_EDEV(qdev);
-	struct rte_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
+	struct rte_eth_fdir_conf *fdir = &eth_dev->data->dev_conf.fdir_conf;
 
 	/* check FDIR modes */
 	switch (fdir->mode) {
@@ -542,7 +542,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -570,7 +570,7 @@ qede_udp_dst_port_del(struct rte_eth_dev *eth_dev,
 					ECORE_TUNN_CLSS_MAC_VLAN, false);
 
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port != tunnel_udp->udp_port) {
 			DP_ERR(edev, "UDP port %u doesn't exist\n",
 				tunnel_udp->udp_port);
@@ -622,7 +622,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 	memset(&tunn, 0, sizeof(tunn));
 
 	switch (tunnel_udp->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (qdev->vxlan.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for VXLAN was already configured\n",
@@ -659,7 +659,7 @@ qede_udp_dst_port_add(struct rte_eth_dev *eth_dev,
 
 		qdev->vxlan.udp_port = udp_port;
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (qdev->geneve.udp_port == tunnel_udp->udp_port) {
 			DP_INFO(edev,
 				"UDP port %u for GENEVE was already configured\n",
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index 298f4e3e4273..144dfef269f3 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -249,7 +249,7 @@ qede_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qid,
 	bufsz = (uint16_t)rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
 	/* cache align the mbuf size to simplfy rx_buf_size calculation */
 	bufsz = QEDE_FLOOR_TO_CACHE_LINE_SIZE(bufsz);
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_SCATTER)	||
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)	||
 	    (max_rx_pkt_len + QEDE_ETH_OVERHEAD) > bufsz) {
 		if (!dev->data->scattered_rx) {
 			DP_INFO(edev, "Forcing scatter-gather mode\n");
diff --git a/drivers/net/qede/qede_rxtx.h b/drivers/net/qede/qede_rxtx.h
index c9334448c887..15112b83f4f7 100644
--- a/drivers/net/qede/qede_rxtx.h
+++ b/drivers/net/qede/qede_rxtx.h
@@ -73,14 +73,14 @@
 #define QEDE_MAX_ETHER_HDR_LEN	(RTE_ETHER_HDR_LEN + QEDE_ETH_OVERHEAD)
 #define QEDE_ETH_MAX_LEN	(RTE_ETHER_MTU + QEDE_MAX_ETHER_HDR_LEN)
 
-#define QEDE_RSS_OFFLOAD_ALL    (ETH_RSS_IPV4			|\
-				 ETH_RSS_NONFRAG_IPV4_TCP	|\
-				 ETH_RSS_NONFRAG_IPV4_UDP	|\
-				 ETH_RSS_IPV6			|\
-				 ETH_RSS_NONFRAG_IPV6_TCP	|\
-				 ETH_RSS_NONFRAG_IPV6_UDP	|\
-				 ETH_RSS_VXLAN			|\
-				 ETH_RSS_GENEVE)
+#define QEDE_RSS_OFFLOAD_ALL    (RTE_ETH_RSS_IPV4			|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV4_UDP	|\
+				 RTE_ETH_RSS_IPV6			|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_TCP	|\
+				 RTE_ETH_RSS_NONFRAG_IPV6_UDP	|\
+				 RTE_ETH_RSS_VXLAN			|\
+				 RTE_ETH_RSS_GENEVE)
 
 #define QEDE_RXTX_MAX(qdev) \
 	(RTE_MAX(qdev->num_rx_queues, qdev->num_tx_queues))
diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 1faf38a714cf..8d1ef5fb22bc 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -56,10 +56,10 @@ struct pmd_internals {
 };
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 RTE_LOG_REGISTER_DEFAULT(eth_ring_logtype, NOTICE);
@@ -102,7 +102,7 @@ eth_dev_configure(struct rte_eth_dev *dev __rte_unused) { return 0; }
 static int
 eth_dev_start(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -110,21 +110,21 @@ static int
 eth_dev_stop(struct rte_eth_dev *dev)
 {
 	dev->data->dev_started = 0;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_down(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return 0;
 }
 
 static int
 eth_dev_set_link_up(struct rte_eth_dev *dev)
 {
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return 0;
 }
 
@@ -163,8 +163,8 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
 
diff --git a/drivers/net/sfc/sfc.c b/drivers/net/sfc/sfc.c
index 274a98e228e4..d93f9d2418b9 100644
--- a/drivers/net/sfc/sfc.c
+++ b/drivers/net/sfc/sfc.c
@@ -81,13 +81,13 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 {
 	uint32_t phy_caps = 0;
 
-	if (~speeds & ETH_LINK_SPEED_FIXED) {
+	if (~speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		phy_caps |= (1 << EFX_PHY_CAP_AN);
 		/*
 		 * If no speeds are specified in the mask, any supported
 		 * may be negotiated
 		 */
-		if (speeds == ETH_LINK_SPEED_AUTONEG)
+		if (speeds == RTE_ETH_LINK_SPEED_AUTONEG)
 			phy_caps |=
 				(1 << EFX_PHY_CAP_1000FDX) |
 				(1 << EFX_PHY_CAP_10000FDX) |
@@ -96,17 +96,17 @@ sfc_phy_cap_from_link_speeds(uint32_t speeds)
 				(1 << EFX_PHY_CAP_50000FDX) |
 				(1 << EFX_PHY_CAP_100000FDX);
 	}
-	if (speeds & ETH_LINK_SPEED_1G)
+	if (speeds & RTE_ETH_LINK_SPEED_1G)
 		phy_caps |= (1 << EFX_PHY_CAP_1000FDX);
-	if (speeds & ETH_LINK_SPEED_10G)
+	if (speeds & RTE_ETH_LINK_SPEED_10G)
 		phy_caps |= (1 << EFX_PHY_CAP_10000FDX);
-	if (speeds & ETH_LINK_SPEED_25G)
+	if (speeds & RTE_ETH_LINK_SPEED_25G)
 		phy_caps |= (1 << EFX_PHY_CAP_25000FDX);
-	if (speeds & ETH_LINK_SPEED_40G)
+	if (speeds & RTE_ETH_LINK_SPEED_40G)
 		phy_caps |= (1 << EFX_PHY_CAP_40000FDX);
-	if (speeds & ETH_LINK_SPEED_50G)
+	if (speeds & RTE_ETH_LINK_SPEED_50G)
 		phy_caps |= (1 << EFX_PHY_CAP_50000FDX);
-	if (speeds & ETH_LINK_SPEED_100G)
+	if (speeds & RTE_ETH_LINK_SPEED_100G)
 		phy_caps |= (1 << EFX_PHY_CAP_100000FDX);
 
 	return phy_caps;
@@ -337,10 +337,10 @@ sfc_set_fw_subvariant(struct sfc_adapter *sa)
 			tx_offloads |= txq_info->offloads;
 	}
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM |
-			   DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM))
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_DEFAULT;
 	else
 		req_fw_subvariant = EFX_NIC_FW_SUBVARIANT_NO_TX_CSUM;
@@ -827,7 +827,7 @@ sfc_attach(struct sfc_adapter *sa)
 	sa->priv.shared->tunnel_encaps =
 		encp->enc_tunnel_encapsulations_supported;
 
-	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & DEV_TX_OFFLOAD_TCP_TSO) {
+	if (sfc_dp_tx_offload_capa(sa->priv.dp_tx) & RTE_ETH_TX_OFFLOAD_TCP_TSO) {
 		sa->tso = encp->enc_fw_assisted_tso_v2_enabled ||
 			  encp->enc_tso_v3_enabled;
 		if (!sa->tso)
@@ -836,8 +836,8 @@ sfc_attach(struct sfc_adapter *sa)
 
 	if (sa->tso &&
 	    (sfc_dp_tx_offload_capa(sa->priv.dp_tx) &
-	     (DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-	      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
+	     (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+	      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) != 0) {
 		sa->tso_encap = encp->enc_fw_assisted_tso_v2_encap_enabled ||
 				encp->enc_tso_v3_enabled;
 		if (!sa->tso_encap)
diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index d4cb96881cd2..ca8774ad0950 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -916,11 +916,11 @@ struct sfc_dp_rx sfc_ef100_rx = {
 	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS |
 				  SFC_DP_RX_FEAT_INTR,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_RX_OFFLOAD_SCATTER |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_SCATTER |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.get_dev_info		= sfc_ef100_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_rx_qsize_up_rings,
 	.qcreate		= sfc_ef100_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef100_tx.c b/drivers/net/sfc/sfc_ef100_tx.c
index 522e9a0d3470..7c91ee3fcb53 100644
--- a/drivers/net/sfc/sfc_ef100_tx.c
+++ b/drivers/net/sfc/sfc_ef100_tx.c
@@ -942,16 +942,16 @@ struct sfc_dp_tx sfc_ef100_tx = {
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_MULTI_SEGS |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef100_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_tx_qsize_up_rings,
 	.qcreate		= sfc_ef100_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_essb_rx.c b/drivers/net/sfc/sfc_ef10_essb_rx.c
index 991329e86f01..9ea207cca163 100644
--- a/drivers/net/sfc/sfc_ef10_essb_rx.c
+++ b/drivers/net/sfc/sfc_ef10_essb_rx.c
@@ -746,8 +746,8 @@ struct sfc_dp_rx sfc_ef10_essb_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_FLOW_FLAG |
 				  SFC_DP_RX_FEAT_FLOW_MARK,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
 	.queue_offload_capa	= 0,
 	.get_dev_info		= sfc_ef10_essb_rx_get_dev_info,
 	.pool_ops_supported	= sfc_ef10_essb_rx_pool_ops_supported,
diff --git a/drivers/net/sfc/sfc_ef10_rx.c b/drivers/net/sfc/sfc_ef10_rx.c
index 49a7d4fb42fd..9aaabd30eee6 100644
--- a/drivers/net/sfc/sfc_ef10_rx.c
+++ b/drivers/net/sfc/sfc_ef10_rx.c
@@ -819,10 +819,10 @@ struct sfc_dp_rx sfc_ef10_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS |
 				  SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.get_dev_info		= sfc_ef10_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_rx_qsize_up_rings,
 	.qcreate		= sfc_ef10_rx_qcreate,
diff --git a/drivers/net/sfc/sfc_ef10_tx.c b/drivers/net/sfc/sfc_ef10_tx.c
index ed43adb4ca5c..e7da4608bcb0 100644
--- a/drivers/net/sfc/sfc_ef10_tx.c
+++ b/drivers/net/sfc/sfc_ef10_tx.c
@@ -958,9 +958,9 @@ sfc_ef10_tx_qcreate(uint16_t port_id, uint16_t queue_id,
 	if (txq->sw_ring == NULL)
 		goto fail_sw_ring_alloc;
 
-	if (info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-			      DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-			      DEV_TX_OFFLOAD_GENEVE_TNL_TSO)) {
+	if (info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+			      RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+			      RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO)) {
 		txq->tsoh = rte_calloc_socket("sfc-ef10-txq-tsoh",
 					      info->txq_entries,
 					      SFC_TSOH_STD_LEN,
@@ -1125,14 +1125,14 @@ struct sfc_dp_tx sfc_ef10_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_EF10,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
@@ -1152,11 +1152,11 @@ struct sfc_dp_tx sfc_ef10_simple_tx = {
 		.type		= SFC_DP_TX,
 	},
 	.features		= SFC_DP_TX_FEAT_MULTI_PROCESS,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_MBUF_FAST_FREE,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM,
 	.get_dev_info		= sfc_ef10_get_dev_info,
 	.qsize_up_rings		= sfc_ef10_tx_qsize_up_rings,
 	.qcreate		= sfc_ef10_tx_qcreate,
diff --git a/drivers/net/sfc/sfc_ethdev.c b/drivers/net/sfc/sfc_ethdev.c
index 2db0d000c3ad..33f800c46e59 100644
--- a/drivers/net/sfc/sfc_ethdev.c
+++ b/drivers/net/sfc/sfc_ethdev.c
@@ -102,19 +102,19 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_vfs = sa->sriov.num_vfs;
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_1000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_1G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_1G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_10000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_10G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_25000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_25G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_25G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_40000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_50000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_50G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_50G;
 	if (sa->port.phy_adv_cap_mask & (1u << EFX_PHY_CAP_100000FDX))
-		dev_info->speed_capa |= ETH_LINK_SPEED_100G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100G;
 
 	dev_info->max_rx_queues = sa->rxq_max;
 	dev_info->max_tx_queues = sa->txq_max;
@@ -142,8 +142,8 @@ sfc_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_offload_capa = sfc_tx_get_dev_offload_caps(sa) |
 				    dev_info->tx_queue_offload_capa;
 
-	if (dev_info->tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		txq_offloads_def |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info->tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		txq_offloads_def |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	dev_info->default_txconf.offloads |= txq_offloads_def;
 
@@ -912,16 +912,16 @@ sfc_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 
 	switch (link_fc) {
 	case 0:
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 		break;
 	case EFX_FCNTL_RESPOND:
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 		break;
 	case EFX_FCNTL_GENERATE:
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 		break;
 	case (EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE):
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 		break;
 	default:
 		sfc_err(sa, "%s: unexpected flow control value %#x",
@@ -952,16 +952,16 @@ sfc_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 	}
 
 	switch (fc_conf->mode) {
-	case RTE_FC_NONE:
+	case RTE_ETH_FC_NONE:
 		fcntl = 0;
 		break;
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		fcntl = EFX_FCNTL_RESPOND;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		fcntl = EFX_FCNTL_GENERATE;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		fcntl = EFX_FCNTL_RESPOND | EFX_FCNTL_GENERATE;
 		break;
 	default:
@@ -1070,7 +1070,7 @@ sfc_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 	 */
 	if (mtu > RTE_ETHER_MTU) {
 		struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
-		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	}
 
 	dev->data->dev_conf.rxmode.max_rx_pkt_len = sa->port.pdu;
@@ -1247,7 +1247,7 @@ sfc_rx_queue_info_get(struct rte_eth_dev *dev, uint16_t ethdev_qid,
 	qinfo->conf.rx_deferred_start = rxq_info->deferred_start;
 	qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
 	if (rxq_info->type_flags & EFX_RXQ_FLAG_SCATTER) {
-		qinfo->conf.offloads |= DEV_RX_OFFLOAD_SCATTER;
+		qinfo->conf.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
 		qinfo->scattered_rx = 1;
 	}
 	qinfo->nb_desc = rxq_info->entries;
@@ -1472,9 +1472,9 @@ static efx_tunnel_protocol_t
 sfc_tunnel_rte_type_to_efx_udp_proto(enum rte_eth_tunnel_type rte_type)
 {
 	switch (rte_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		return EFX_TUNNEL_PROTOCOL_VXLAN;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		return EFX_TUNNEL_PROTOCOL_GENEVE;
 	default:
 		return EFX_TUNNEL_NPROTOS;
@@ -1601,7 +1601,7 @@ sfc_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 
 	/*
 	 * Mapping of hash configuration between RTE and EFX is not one-to-one,
-	 * hence, conversion is done here to derive a correct set of ETH_RSS
+	 * hence, conversion is done here to derive a correct set of RTE_ETH_RSS
 	 * flags which corresponds to the active EFX configuration stored
 	 * locally in 'sfc_adapter' and kept up-to-date
 	 */
@@ -1727,8 +1727,8 @@ sfc_dev_rss_reta_query(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	for (entry = 0; entry < reta_size; entry++) {
-		int grp = entry / RTE_RETA_GROUP_SIZE;
-		int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+		int grp = entry / RTE_ETH_RETA_GROUP_SIZE;
+		int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
 
 		if ((reta_conf[grp].mask >> grp_idx) & 1)
 			reta_conf[grp].reta[grp_idx] = rss->tbl[entry];
@@ -1777,10 +1777,10 @@ sfc_dev_rss_reta_update(struct rte_eth_dev *dev,
 	rte_memcpy(rss_tbl_new, rss->tbl, sizeof(rss->tbl));
 
 	for (entry = 0; entry < reta_size; entry++) {
-		int grp_idx = entry % RTE_RETA_GROUP_SIZE;
+		int grp_idx = entry % RTE_ETH_RETA_GROUP_SIZE;
 		struct rte_eth_rss_reta_entry64 *grp;
 
-		grp = &reta_conf[entry / RTE_RETA_GROUP_SIZE];
+		grp = &reta_conf[entry / RTE_ETH_RETA_GROUP_SIZE];
 
 		if (grp->mask & (1ull << grp_idx)) {
 			if (grp->reta[grp_idx] >= rss->channels) {
diff --git a/drivers/net/sfc/sfc_flow.c b/drivers/net/sfc/sfc_flow.c
index 4f5993a68d23..dc2cdfea13c4 100644
--- a/drivers/net/sfc/sfc_flow.c
+++ b/drivers/net/sfc/sfc_flow.c
@@ -390,7 +390,7 @@ sfc_flow_parse_vlan(const struct rte_flow_item *item,
 	const struct rte_flow_item_vlan *spec = NULL;
 	const struct rte_flow_item_vlan *mask = NULL;
 	const struct rte_flow_item_vlan supp_mask = {
-		.tci = rte_cpu_to_be_16(ETH_VLAN_ID_MAX),
+		.tci = rte_cpu_to_be_16(RTE_ETH_VLAN_ID_MAX),
 		.inner_type = RTE_BE16(0xffff),
 	};
 
diff --git a/drivers/net/sfc/sfc_port.c b/drivers/net/sfc/sfc_port.c
index adb2b2cb8175..dea5272a79bc 100644
--- a/drivers/net/sfc/sfc_port.c
+++ b/drivers/net/sfc/sfc_port.c
@@ -387,7 +387,7 @@ sfc_port_configure(struct sfc_adapter *sa)
 
 	sfc_log_init(sa, "entry");
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME)
 		port->pdu = rxmode->max_rx_pkt_len;
 	else
 		port->pdu = EFX_MAC_PDU(dev_data->mtu);
@@ -577,66 +577,66 @@ sfc_port_link_mode_to_info(efx_link_mode_t link_mode,
 
 	memset(link_info, 0, sizeof(*link_info));
 	if ((link_mode == EFX_LINK_DOWN) || (link_mode == EFX_LINK_UNKNOWN))
-		link_info->link_status = ETH_LINK_DOWN;
+		link_info->link_status = RTE_ETH_LINK_DOWN;
 	else
-		link_info->link_status = ETH_LINK_UP;
+		link_info->link_status = RTE_ETH_LINK_UP;
 
 	switch (link_mode) {
 	case EFX_LINK_10HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_10FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_100FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100M;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100M;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_1000HDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 		break;
 	case EFX_LINK_1000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_1G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_1G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_10000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_10G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_10G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_25000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_25G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_25G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_40000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_40G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_40G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_50000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_50G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_50G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	case EFX_LINK_100000FDX:
-		link_info->link_speed  = ETH_SPEED_NUM_100G;
-		link_info->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_100G;
+		link_info->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		break;
 	default:
 		SFC_ASSERT(B_FALSE);
 		/* FALLTHROUGH */
 	case EFX_LINK_UNKNOWN:
 	case EFX_LINK_DOWN:
-		link_info->link_speed  = ETH_SPEED_NUM_NONE;
+		link_info->link_speed  = RTE_ETH_SPEED_NUM_NONE;
 		link_info->link_duplex = 0;
 		break;
 	}
 
-	link_info->link_autoneg = ETH_LINK_AUTONEG;
+	link_info->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 int
diff --git a/drivers/net/sfc/sfc_rx.c b/drivers/net/sfc/sfc_rx.c
index 280e8a61f9e0..a83b47a8d111 100644
--- a/drivers/net/sfc/sfc_rx.c
+++ b/drivers/net/sfc/sfc_rx.c
@@ -647,9 +647,9 @@ struct sfc_dp_rx sfc_efx_rx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_RX_EFX,
 	},
 	.features		= SFC_DP_RX_FEAT_INTR,
-	.dev_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
-				  DEV_RX_OFFLOAD_RSS_HASH,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.dev_offload_capa	= RTE_ETH_RX_OFFLOAD_CHECKSUM |
+				  RTE_ETH_RX_OFFLOAD_RSS_HASH,
+	.queue_offload_capa	= RTE_ETH_RX_OFFLOAD_SCATTER,
 	.qsize_up_rings		= sfc_efx_rx_qsize_up_rings,
 	.qcreate		= sfc_efx_rx_qcreate,
 	.qdestroy		= sfc_efx_rx_qdestroy,
@@ -930,7 +930,7 @@ sfc_rx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (encp->enc_tunnel_encapsulations_supported == 0)
-		no_caps |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	return ~no_caps;
 }
@@ -940,7 +940,7 @@ sfc_rx_get_dev_offload_caps(struct sfc_adapter *sa)
 {
 	uint64_t caps = sa->priv.dp_rx->dev_offload_capa;
 
-	caps |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	caps |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	return caps & sfc_rx_get_offload_mask(sa);
 }
@@ -1141,7 +1141,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 
 	if (!sfc_rx_check_scatter(sa->port.pdu, buf_size,
 				  encp->enc_rx_prefix_size,
-				  (offloads & DEV_RX_OFFLOAD_SCATTER),
+				  (offloads & RTE_ETH_RX_OFFLOAD_SCATTER),
 				  encp->enc_rx_scatter_max,
 				  &error)) {
 		sfc_err(sa, "RxQ %d (internal %u) MTU check failed: %s",
@@ -1167,15 +1167,15 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 		rxq_info->type = EFX_RXQ_TYPE_DEFAULT;
 
 	rxq_info->type_flags |=
-		(offloads & DEV_RX_OFFLOAD_SCATTER) ?
+		(offloads & RTE_ETH_RX_OFFLOAD_SCATTER) ?
 		EFX_RXQ_FLAG_SCATTER : EFX_RXQ_FLAG_NONE;
 
 	if ((encp->enc_tunnel_encapsulations_supported != 0) &&
 	    (sfc_dp_rx_offload_capa(sa->priv.dp_rx) &
-	     DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
+	     RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) != 0)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_INNER_CLASSES;
 
-	if (offloads & DEV_RX_OFFLOAD_RSS_HASH)
+	if (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)
 		rxq_info->type_flags |= EFX_RXQ_FLAG_RSS_HASH;
 
 	rc = sfc_ev_qinit(sa, SFC_EVQ_TYPE_RX, sw_index,
@@ -1205,7 +1205,7 @@ sfc_rx_qinit(struct sfc_adapter *sa, sfc_sw_index_t sw_index,
 	rxq_info->refill_mb_pool = mb_pool;
 
 	if (rss->hash_support == EFX_RX_HASH_AVAILABLE && rss->channels > 0 &&
-	    (offloads & DEV_RX_OFFLOAD_RSS_HASH))
+	    (offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
 		rxq_info->rxq_flags = SFC_RXQ_FLAG_RSS_HASH;
 	else
 		rxq_info->rxq_flags = 0;
@@ -1301,19 +1301,19 @@ sfc_rx_qfini(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
  * Mapping between RTE RSS hash functions and their EFX counterparts.
  */
 static const struct sfc_rss_hf_rte_to_efx sfc_rss_hf_map[] = {
-	{ ETH_RSS_NONFRAG_IPV4_TCP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_TCP,
 	  EFX_RX_HASH(IPV4_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV4_UDP,
+	{ RTE_ETH_RSS_NONFRAG_IPV4_UDP,
 	  EFX_RX_HASH(IPV4_UDP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_IPV6_TCP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_IPV6_TCP_EX,
 	  EFX_RX_HASH(IPV6_TCP, 4TUPLE) },
-	{ ETH_RSS_NONFRAG_IPV6_UDP | ETH_RSS_IPV6_UDP_EX,
+	{ RTE_ETH_RSS_NONFRAG_IPV6_UDP | RTE_ETH_RSS_IPV6_UDP_EX,
 	  EFX_RX_HASH(IPV6_UDP, 4TUPLE) },
-	{ ETH_RSS_IPV4 | ETH_RSS_FRAG_IPV4 | ETH_RSS_NONFRAG_IPV4_OTHER,
+	{ RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_FRAG_IPV4 | RTE_ETH_RSS_NONFRAG_IPV4_OTHER,
 	  EFX_RX_HASH(IPV4_TCP, 2TUPLE) | EFX_RX_HASH(IPV4_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV4, 2TUPLE) },
-	{ ETH_RSS_IPV6 | ETH_RSS_FRAG_IPV6 | ETH_RSS_NONFRAG_IPV6_OTHER |
-	  ETH_RSS_IPV6_EX,
+	{ RTE_ETH_RSS_IPV6 | RTE_ETH_RSS_FRAG_IPV6 | RTE_ETH_RSS_NONFRAG_IPV6_OTHER |
+	  RTE_ETH_RSS_IPV6_EX,
 	  EFX_RX_HASH(IPV6_TCP, 2TUPLE) | EFX_RX_HASH(IPV6_UDP, 2TUPLE) |
 	  EFX_RX_HASH(IPV6, 2TUPLE) }
 };
@@ -1633,10 +1633,10 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	int rc = 0;
 
 	switch (rxmode->mq_mode) {
-	case ETH_MQ_RX_NONE:
+	case RTE_ETH_MQ_RX_NONE:
 		/* No special checks are required */
 		break;
-	case ETH_MQ_RX_RSS:
+	case RTE_ETH_MQ_RX_RSS:
 		if (rss->context_type == EFX_RX_SCALE_UNAVAILABLE) {
 			sfc_err(sa, "RSS is not available");
 			rc = EINVAL;
@@ -1653,16 +1653,16 @@ sfc_rx_check_mode(struct sfc_adapter *sa, struct rte_eth_rxmode *rxmode)
 	 * so unsupported offloads cannot be added as the result of
 	 * below check.
 	 */
-	if ((rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM) !=
-	    (offloads_supported & DEV_RX_OFFLOAD_CHECKSUM)) {
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM) !=
+	    (offloads_supported & RTE_ETH_RX_OFFLOAD_CHECKSUM)) {
 		sfc_warn(sa, "Rx checksum offloads cannot be disabled - always on (IPv4/TCP/UDP)");
-		rxmode->offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_CHECKSUM;
 	}
 
-	if ((offloads_supported & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
-	    (~rxmode->offloads & DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
+	if ((offloads_supported & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM) &&
+	    (~rxmode->offloads & RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM)) {
 		sfc_warn(sa, "Rx outer IPv4 checksum offload cannot be disabled - always on");
-		rxmode->offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 	}
 
 	return rc;
@@ -1808,7 +1808,7 @@ sfc_rx_configure(struct sfc_adapter *sa)
 	}
 
 configure_rss:
-	rss->channels = (dev_conf->rxmode.mq_mode == ETH_MQ_RX_RSS) ?
+	rss->channels = (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) ?
 			 MIN(sas->ethdev_rxq_count, EFX_MAXRSS) : 0;
 
 	if (rss->channels > 0) {
diff --git a/drivers/net/sfc/sfc_tx.c b/drivers/net/sfc/sfc_tx.c
index 49b239f4d261..359acc71a47f 100644
--- a/drivers/net/sfc/sfc_tx.c
+++ b/drivers/net/sfc/sfc_tx.c
@@ -54,23 +54,23 @@ sfc_tx_get_offload_mask(struct sfc_adapter *sa)
 	uint64_t no_caps = 0;
 
 	if (!encp->enc_hw_tx_insert_vlan_enabled)
-		no_caps |= DEV_TX_OFFLOAD_VLAN_INSERT;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if (!encp->enc_tunnel_encapsulations_supported)
-		no_caps |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM;
+		no_caps |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 	if (!sa->tso)
-		no_caps |= DEV_TX_OFFLOAD_TCP_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_VXLAN)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_VXLAN_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 
 	if (!sa->tso_encap ||
 	    (encp->enc_tunnel_encapsulations_supported &
 	     (1u << EFX_TUNNEL_PROTOCOL_GENEVE)) == 0)
-		no_caps |= DEV_TX_OFFLOAD_GENEVE_TNL_TSO;
+		no_caps |= RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO;
 
 	return ~no_caps;
 }
@@ -114,8 +114,8 @@ sfc_tx_qcheck_conf(struct sfc_adapter *sa, unsigned int txq_max_fill_level,
 	}
 
 	/* We either perform both TCP and UDP offload, or no offload at all */
-	if (((offloads & DEV_TX_OFFLOAD_TCP_CKSUM) == 0) !=
-	    ((offloads & DEV_TX_OFFLOAD_UDP_CKSUM) == 0)) {
+	if (((offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) == 0) !=
+	    ((offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) == 0)) {
 		sfc_err(sa, "TCP and UDP offloads can't be set independently");
 		rc = EINVAL;
 	}
@@ -309,7 +309,7 @@ sfc_tx_check_mode(struct sfc_adapter *sa, const struct rte_eth_txmode *txmode)
 	int rc = 0;
 
 	switch (txmode->mq_mode) {
-	case ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_NONE:
 		break;
 	default:
 		sfc_err(sa, "Tx multi-queue mode %u not supported",
@@ -515,23 +515,23 @@ sfc_tx_qstart(struct sfc_adapter *sa, sfc_sw_index_t sw_index)
 	if (rc != 0)
 		goto fail_ev_qstart;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_IPV4;
 
-	if (txq_info->offloads & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+	if (txq_info->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 		flags |= EFX_TXQ_CKSUM_INNER_IPV4;
 
-	if ((txq_info->offloads & DEV_TX_OFFLOAD_TCP_CKSUM) ||
-	    (txq_info->offloads & DEV_TX_OFFLOAD_UDP_CKSUM)) {
+	if ((txq_info->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) ||
+	    (txq_info->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM)) {
 		flags |= EFX_TXQ_CKSUM_TCPUDP;
 
-		if (offloads_supported & DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM)
+		if (offloads_supported & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM)
 			flags |= EFX_TXQ_CKSUM_INNER_TCPUDP;
 	}
 
-	if (txq_info->offloads & (DEV_TX_OFFLOAD_TCP_TSO |
-				  DEV_TX_OFFLOAD_VXLAN_TNL_TSO |
-				  DEV_TX_OFFLOAD_GENEVE_TNL_TSO))
+	if (txq_info->offloads & (RTE_ETH_TX_OFFLOAD_TCP_TSO |
+				  RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
+				  RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO))
 		flags |= EFX_TXQ_FATSOV2;
 
 	rc = efx_tx_qcreate(sa->nic, txq->hw_index, 0, &txq->mem,
@@ -862,9 +862,9 @@ sfc_efx_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 		/*
 		 * Here VLAN TCI is expected to be zero in case if no
-		 * DEV_TX_OFFLOAD_VLAN_INSERT capability is advertised;
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT capability is advertised;
 		 * if the calling app ignores the absence of
-		 * DEV_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
+		 * RTE_ETH_TX_OFFLOAD_VLAN_INSERT and pushes VLAN TCI, then
 		 * TX_ERROR will occur
 		 */
 		pkt_descs += sfc_efx_tx_maybe_insert_tag(txq, m_seg, &pend);
@@ -1228,13 +1228,13 @@ struct sfc_dp_tx sfc_efx_tx = {
 		.hw_fw_caps	= SFC_DP_HW_FW_CAP_TX_EFX,
 	},
 	.features		= 0,
-	.dev_offload_capa	= DEV_TX_OFFLOAD_VLAN_INSERT |
-				  DEV_TX_OFFLOAD_MULTI_SEGS,
-	.queue_offload_capa	= DEV_TX_OFFLOAD_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_UDP_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_CKSUM |
-				  DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-				  DEV_TX_OFFLOAD_TCP_TSO,
+	.dev_offload_capa	= RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				  RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
+	.queue_offload_capa	= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  RTE_ETH_TX_OFFLOAD_TCP_TSO,
 	.qsize_up_rings		= sfc_efx_tx_qsize_up_rings,
 	.qcreate		= sfc_efx_tx_qcreate,
 	.qdestroy		= sfc_efx_tx_qdestroy,
diff --git a/drivers/net/softnic/rte_eth_softnic.c b/drivers/net/softnic/rte_eth_softnic.c
index b3b55b9035b1..3ef33818a9e0 100644
--- a/drivers/net/softnic/rte_eth_softnic.c
+++ b/drivers/net/softnic/rte_eth_softnic.c
@@ -173,7 +173,7 @@ pmd_dev_start(struct rte_eth_dev *dev)
 		return status;
 
 	/* Link UP */
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	return 0;
 }
@@ -184,7 +184,7 @@ pmd_dev_stop(struct rte_eth_dev *dev)
 	struct pmd_internals *p = dev->data->dev_private;
 
 	/* Link DOWN */
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	/* Firmware */
 	softnic_pipeline_disable_all(p);
@@ -386,10 +386,10 @@ pmd_ethdev_register(struct rte_vdev_device *vdev,
 
 	/* dev->data */
 	dev->data->dev_private = dev_private;
-	dev->data->dev_link.link_speed = ETH_SPEED_NUM_100G;
-	dev->data->dev_link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	dev->data->dev_link.link_autoneg = ETH_LINK_FIXED;
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	dev->data->dev_link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	dev->data->dev_link.link_autoneg = RTE_ETH_LINK_FIXED;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	dev->data->mac_addrs = &eth_addr;
 	dev->data->promiscuous = 1;
 	dev->data->numa_node = params->cpu_id;
diff --git a/drivers/net/szedata2/rte_eth_szedata2.c b/drivers/net/szedata2/rte_eth_szedata2.c
index 7416a6b1b816..255444a4181d 100644
--- a/drivers/net/szedata2/rte_eth_szedata2.c
+++ b/drivers/net/szedata2/rte_eth_szedata2.c
@@ -1042,7 +1042,7 @@ static int
 eth_dev_configure(struct rte_eth_dev *dev)
 {
 	struct rte_eth_dev_data *data = dev->data;
-	if (data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) {
+	if (data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SCATTER) {
 		dev->rx_pkt_burst = eth_szedata2_rx_scattered;
 		data->scattered_rx = 1;
 	} else {
@@ -1064,11 +1064,11 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_rx_queues = internals->max_rx_queues;
 	dev_info->max_tx_queues = internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_SCATTER;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_SCATTER;
 	dev_info->tx_offload_capa = 0;
 	dev_info->rx_queue_offload_capa = 0;
 	dev_info->tx_queue_offload_capa = 0;
-	dev_info->speed_capa = ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1204,10 +1204,10 @@ eth_link_update(struct rte_eth_dev *dev,
 
 	memset(&link, 0, sizeof(link));
 
-	link.link_speed = ETH_SPEED_NUM_100G;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_status = ETH_LINK_UP;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_speed = RTE_ETH_SPEED_NUM_100G;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	rte_eth_linkstatus_set(dev, &link);
 	return 0;
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index c515de3bf71d..ad5980ef5280 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -70,16 +70,16 @@
 
 #define TAP_IOV_DEFAULT_MAX 1024
 
-#define TAP_RX_OFFLOAD (DEV_RX_OFFLOAD_SCATTER |	\
-			DEV_RX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_RX_OFFLOAD_UDP_CKSUM |	\
-			DEV_RX_OFFLOAD_TCP_CKSUM)
+#define TAP_RX_OFFLOAD (RTE_ETH_RX_OFFLOAD_SCATTER |	\
+			RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
 
-#define TAP_TX_OFFLOAD (DEV_TX_OFFLOAD_MULTI_SEGS |	\
-			DEV_TX_OFFLOAD_IPV4_CKSUM |	\
-			DEV_TX_OFFLOAD_UDP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_CKSUM |	\
-			DEV_TX_OFFLOAD_TCP_TSO)
+#define TAP_TX_OFFLOAD (RTE_ETH_TX_OFFLOAD_MULTI_SEGS |	\
+			RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+			RTE_ETH_TX_OFFLOAD_TCP_TSO)
 
 static int tap_devices_count;
 
@@ -97,10 +97,10 @@ static const char *valid_arguments[] = {
 static volatile uint32_t tap_trigger;	/* Rx trigger */
 
 static struct rte_eth_link pmd_link = {
-	.link_speed = ETH_SPEED_NUM_10G,
-	.link_duplex = ETH_LINK_FULL_DUPLEX,
-	.link_status = ETH_LINK_DOWN,
-	.link_autoneg = ETH_LINK_FIXED,
+	.link_speed = RTE_ETH_SPEED_NUM_10G,
+	.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+	.link_status = RTE_ETH_LINK_DOWN,
+	.link_autoneg = RTE_ETH_LINK_FIXED,
 };
 
 static void
@@ -433,7 +433,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 
 		len = readv(process_private->rxq_fds[rxq->queue_id],
 			*rxq->iovecs,
-			1 + (rxq->rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ?
+			1 + (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ?
 			     rxq->nb_rx_desc : 1));
 		if (len < (int)sizeof(struct tun_pi))
 			break;
@@ -489,7 +489,7 @@ pmd_rx_burst(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		seg->next = NULL;
 		mbuf->packet_type = rte_net_get_ptype(mbuf, NULL,
 						      RTE_PTYPE_ALL_MASK);
-		if (rxq->rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		if (rxq->rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 			tap_verify_csum(mbuf);
 
 		/* account for the receive frame */
@@ -866,7 +866,7 @@ tap_link_set_down(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 0, LOCAL_ONLY);
 }
 
@@ -876,7 +876,7 @@ tap_link_set_up(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 	struct ifreq ifr = { .ifr_flags = IFF_UP };
 
-	dev->data->dev_link.link_status = ETH_LINK_UP;
+	dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 	return tap_ioctl(pmd, SIOCSIFFLAGS, &ifr, 1, LOCAL_AND_REMOTE);
 }
 
@@ -956,30 +956,30 @@ tap_dev_speed_capa(void)
 	uint32_t speed = pmd_link.link_speed;
 	uint32_t capa = 0;
 
-	if (speed >= ETH_SPEED_NUM_10M)
-		capa |= ETH_LINK_SPEED_10M;
-	if (speed >= ETH_SPEED_NUM_100M)
-		capa |= ETH_LINK_SPEED_100M;
-	if (speed >= ETH_SPEED_NUM_1G)
-		capa |= ETH_LINK_SPEED_1G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_2_5G;
-	if (speed >= ETH_SPEED_NUM_5G)
-		capa |= ETH_LINK_SPEED_5G;
-	if (speed >= ETH_SPEED_NUM_10G)
-		capa |= ETH_LINK_SPEED_10G;
-	if (speed >= ETH_SPEED_NUM_20G)
-		capa |= ETH_LINK_SPEED_20G;
-	if (speed >= ETH_SPEED_NUM_25G)
-		capa |= ETH_LINK_SPEED_25G;
-	if (speed >= ETH_SPEED_NUM_40G)
-		capa |= ETH_LINK_SPEED_40G;
-	if (speed >= ETH_SPEED_NUM_50G)
-		capa |= ETH_LINK_SPEED_50G;
-	if (speed >= ETH_SPEED_NUM_56G)
-		capa |= ETH_LINK_SPEED_56G;
-	if (speed >= ETH_SPEED_NUM_100G)
-		capa |= ETH_LINK_SPEED_100G;
+	if (speed >= RTE_ETH_SPEED_NUM_10M)
+		capa |= RTE_ETH_LINK_SPEED_10M;
+	if (speed >= RTE_ETH_SPEED_NUM_100M)
+		capa |= RTE_ETH_LINK_SPEED_100M;
+	if (speed >= RTE_ETH_SPEED_NUM_1G)
+		capa |= RTE_ETH_LINK_SPEED_1G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_2_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_5G)
+		capa |= RTE_ETH_LINK_SPEED_5G;
+	if (speed >= RTE_ETH_SPEED_NUM_10G)
+		capa |= RTE_ETH_LINK_SPEED_10G;
+	if (speed >= RTE_ETH_SPEED_NUM_20G)
+		capa |= RTE_ETH_LINK_SPEED_20G;
+	if (speed >= RTE_ETH_SPEED_NUM_25G)
+		capa |= RTE_ETH_LINK_SPEED_25G;
+	if (speed >= RTE_ETH_SPEED_NUM_40G)
+		capa |= RTE_ETH_LINK_SPEED_40G;
+	if (speed >= RTE_ETH_SPEED_NUM_50G)
+		capa |= RTE_ETH_LINK_SPEED_50G;
+	if (speed >= RTE_ETH_SPEED_NUM_56G)
+		capa |= RTE_ETH_LINK_SPEED_56G;
+	if (speed >= RTE_ETH_SPEED_NUM_100G)
+		capa |= RTE_ETH_LINK_SPEED_100G;
 
 	return capa;
 }
@@ -1196,15 +1196,15 @@ tap_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
 		tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, REMOTE_ONLY);
 		if (!(ifr.ifr_flags & IFF_UP) ||
 		    !(ifr.ifr_flags & IFF_RUNNING)) {
-			dev_link->link_status = ETH_LINK_DOWN;
+			dev_link->link_status = RTE_ETH_LINK_DOWN;
 			return 0;
 		}
 	}
 	tap_ioctl(pmd, SIOCGIFFLAGS, &ifr, 0, LOCAL_ONLY);
 	dev_link->link_status =
 		((ifr.ifr_flags & IFF_UP) && (ifr.ifr_flags & IFF_RUNNING) ?
-		 ETH_LINK_UP :
-		 ETH_LINK_DOWN);
+		 RTE_ETH_LINK_UP :
+		 RTE_ETH_LINK_DOWN);
 	return 0;
 }
 
@@ -1391,7 +1391,7 @@ tap_gso_ctx_setup(struct rte_gso_ctx *gso_ctx, struct rte_eth_dev *dev)
 	int ret;
 
 	/* initialize GSO context */
-	gso_types = DEV_TX_OFFLOAD_TCP_TSO;
+	gso_types = RTE_ETH_TX_OFFLOAD_TCP_TSO;
 	if (!pmd->gso_ctx_mp) {
 		/*
 		 * Create private mbuf pool with TAP_GSO_MBUF_SEG_SIZE
@@ -1606,9 +1606,9 @@ tap_tx_queue_setup(struct rte_eth_dev *dev,
 
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->csum = !!(offloads &
-			(DEV_TX_OFFLOAD_IPV4_CKSUM |
-			 DEV_TX_OFFLOAD_UDP_CKSUM |
-			 DEV_TX_OFFLOAD_TCP_CKSUM));
+			(RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			 RTE_ETH_TX_OFFLOAD_TCP_CKSUM));
 
 	ret = tap_setup_queue(dev, internals, tx_queue_id, 0);
 	if (ret == -1)
@@ -1765,7 +1765,7 @@ static int
 tap_flow_ctrl_get(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	fc_conf->mode = RTE_FC_NONE;
+	fc_conf->mode = RTE_ETH_FC_NONE;
 	return 0;
 }
 
@@ -1773,7 +1773,7 @@ static int
 tap_flow_ctrl_set(struct rte_eth_dev *dev __rte_unused,
 		  struct rte_eth_fc_conf *fc_conf)
 {
-	if (fc_conf->mode != RTE_FC_NONE)
+	if (fc_conf->mode != RTE_ETH_FC_NONE)
 		return -ENOTSUP;
 	return 0;
 }
@@ -2267,7 +2267,7 @@ rte_pmd_tun_probe(struct rte_vdev_device *dev)
 			}
 		}
 	}
-	pmd_link.link_speed = ETH_SPEED_NUM_10G;
+	pmd_link.link_speed = RTE_ETH_SPEED_NUM_10G;
 
 	TAP_LOG(DEBUG, "Initializing pmd_tun for %s", name);
 
@@ -2441,7 +2441,7 @@ rte_pmd_tap_probe(struct rte_vdev_device *dev)
 		return 0;
 	}
 
-	speed = ETH_SPEED_NUM_10G;
+	speed = RTE_ETH_SPEED_NUM_10G;
 
 	/* use tap%d which causes kernel to choose next available */
 	strlcpy(tap_name, DEFAULT_TAP_NAME "%d", RTE_ETH_NAME_MAX_LEN);
diff --git a/drivers/net/tap/tap_rss.h b/drivers/net/tap/tap_rss.h
index 176e7180bdaa..48c151cf6b68 100644
--- a/drivers/net/tap/tap_rss.h
+++ b/drivers/net/tap/tap_rss.h
@@ -13,7 +13,7 @@
 #define TAP_RSS_HASH_KEY_SIZE 40
 
 /* Supported RSS */
-#define TAP_RSS_HF_MASK (~(ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP))
+#define TAP_RSS_HF_MASK (~(RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP | RTE_ETH_RSS_TCP))
 
 /* hashed fields for RSS */
 enum hash_field {
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index fc1844ddfce1..8d02fbae7274 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -61,14 +61,14 @@ nicvf_link_status_update(struct nicvf *nic,
 {
 	memset(link, 0, sizeof(*link));
 
-	link->link_status = nic->link_up ? ETH_LINK_UP : ETH_LINK_DOWN;
+	link->link_status = nic->link_up ? RTE_ETH_LINK_UP : RTE_ETH_LINK_DOWN;
 
 	if (nic->duplex == NICVF_HALF_DUPLEX)
-		link->link_duplex = ETH_LINK_HALF_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
 	else if (nic->duplex == NICVF_FULL_DUPLEX)
-		link->link_duplex = ETH_LINK_FULL_DUPLEX;
+		link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	link->link_speed = nic->speed;
-	link->link_autoneg = ETH_LINK_AUTONEG;
+	link->link_autoneg = RTE_ETH_LINK_AUTONEG;
 }
 
 static void
@@ -134,7 +134,7 @@ nicvf_dev_link_update(struct rte_eth_dev *dev, int wait_to_complete)
 		/* rte_eth_link_get() might need to wait up to 9 seconds */
 		for (i = 0; i < MAX_CHECK_TIME; i++) {
 			nicvf_link_status_update(nic, &link);
-			if (link.link_status == ETH_LINK_UP)
+			if (link.link_status == RTE_ETH_LINK_UP)
 				break;
 			rte_delay_ms(CHECK_INTERVAL);
 		}
@@ -177,9 +177,9 @@ nicvf_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu)
 		return -EINVAL;
 
 	if (frame_size > NIC_HW_L2_MAX_LEN)
-		rxmode->offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		rxmode->offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	if (nicvf_mbox_update_hw_max_frs(nic, mtu))
 		return -EINVAL;
@@ -404,35 +404,35 @@ nicvf_rss_ethdev_to_nic(struct nicvf *nic, uint64_t ethdev_rss)
 {
 	uint64_t nic_rss = 0;
 
-	if (ethdev_rss & ETH_RSS_IPV4)
+	if (ethdev_rss & RTE_ETH_RSS_IPV4)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_IPV6)
+	if (ethdev_rss & RTE_ETH_RSS_IPV6)
 		nic_rss |= RSS_IP_ENA;
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		nic_rss |= (RSS_IP_ENA | RSS_UDP_ENA);
 
-	if (ethdev_rss & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (ethdev_rss & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		nic_rss |= (RSS_IP_ENA | RSS_TCP_ENA);
 
-	if (ethdev_rss & ETH_RSS_PORT)
+	if (ethdev_rss & RTE_ETH_RSS_PORT)
 		nic_rss |= RSS_L2_EXTENDED_HASH_ENA;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
-		if (ethdev_rss & ETH_RSS_VXLAN)
+		if (ethdev_rss & RTE_ETH_RSS_VXLAN)
 			nic_rss |= RSS_TUN_VXLAN_ENA;
 
-		if (ethdev_rss & ETH_RSS_GENEVE)
+		if (ethdev_rss & RTE_ETH_RSS_GENEVE)
 			nic_rss |= RSS_TUN_GENEVE_ENA;
 
-		if (ethdev_rss & ETH_RSS_NVGRE)
+		if (ethdev_rss & RTE_ETH_RSS_NVGRE)
 			nic_rss |= RSS_TUN_NVGRE_ENA;
 	}
 
@@ -445,28 +445,28 @@ nicvf_rss_nic_to_ethdev(struct nicvf *nic,  uint64_t nic_rss)
 	uint64_t ethdev_rss = 0;
 
 	if (nic_rss & RSS_IP_ENA)
-		ethdev_rss |= (ETH_RSS_IPV4 | ETH_RSS_IPV6);
+		ethdev_rss |= (RTE_ETH_RSS_IPV4 | RTE_ETH_RSS_IPV6);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_TCP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_TCP |
-				ETH_RSS_NONFRAG_IPV6_TCP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP);
 
 	if ((nic_rss & RSS_IP_ENA) && (nic_rss & RSS_UDP_ENA))
-		ethdev_rss |= (ETH_RSS_NONFRAG_IPV4_UDP |
-				ETH_RSS_NONFRAG_IPV6_UDP);
+		ethdev_rss |= (RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP);
 
 	if (nic_rss & RSS_L2_EXTENDED_HASH_ENA)
-		ethdev_rss |= ETH_RSS_PORT;
+		ethdev_rss |= RTE_ETH_RSS_PORT;
 
 	if (nicvf_hw_cap(nic) & NICVF_CAP_TUNNEL_PARSING) {
 		if (nic_rss & RSS_TUN_VXLAN_ENA)
-			ethdev_rss |= ETH_RSS_VXLAN;
+			ethdev_rss |= RTE_ETH_RSS_VXLAN;
 
 		if (nic_rss & RSS_TUN_GENEVE_ENA)
-			ethdev_rss |= ETH_RSS_GENEVE;
+			ethdev_rss |= RTE_ETH_RSS_GENEVE;
 
 		if (nic_rss & RSS_TUN_NVGRE_ENA)
-			ethdev_rss |= ETH_RSS_NVGRE;
+			ethdev_rss |= RTE_ETH_RSS_NVGRE;
 	}
 	return ethdev_rss;
 }
@@ -493,8 +493,8 @@ nicvf_dev_reta_query(struct rte_eth_dev *dev,
 		return ret;
 
 	/* Copy RETA table */
-	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				reta_conf[i].reta[j] = tbl[j];
 	}
@@ -523,8 +523,8 @@ nicvf_dev_reta_update(struct rte_eth_dev *dev,
 		return ret;
 
 	/* Copy RETA table */
-	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_RETA_GROUP_SIZE); i++) {
-		for (j = 0; j < RTE_RETA_GROUP_SIZE; j++)
+	for (i = 0; i < (NIC_MAX_RSS_IDR_TBL_SIZE / RTE_ETH_RETA_GROUP_SIZE); i++) {
+		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
 			if ((reta_conf[i].mask >> j) & 0x01)
 				tbl[j] = reta_conf[i].reta[j];
 	}
@@ -821,9 +821,9 @@ nicvf_configure_rss(struct rte_eth_dev *dev)
 		    dev->data->nb_rx_queues,
 		    dev->data->dev_conf.lpbk_mode, rsshf);
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_NONE)
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_NONE)
 		ret = nicvf_rss_term(nic);
-	else if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS)
+	else if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
 		ret = nicvf_rss_config(nic, dev->data->nb_rx_queues, rsshf);
 	if (ret)
 		PMD_INIT_LOG(ERR, "Failed to configure RSS %d", ret);
@@ -884,7 +884,7 @@ nicvf_set_tx_function(struct rte_eth_dev *dev)
 
 	for (i = 0; i < dev->data->nb_tx_queues; i++) {
 		txq = dev->data->tx_queues[i];
-		if (txq->offloads & DEV_TX_OFFLOAD_MULTI_SEGS) {
+		if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) {
 			multiseg = true;
 			break;
 		}
@@ -1007,7 +1007,7 @@ nicvf_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
 	offloads = tx_conf->offloads | dev->data->dev_conf.txmode.offloads;
 	txq->offloads = offloads;
 
-	is_single_pool = !!(offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE);
+	is_single_pool = !!(offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE);
 
 	/* Choose optimum free threshold value for multipool case */
 	if (!is_single_pool) {
@@ -1397,11 +1397,11 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	PMD_INIT_FUNC_TRACE();
 
 	/* Autonegotiation may be disabled */
-	dev_info->speed_capa = ETH_LINK_SPEED_FIXED;
-	dev_info->speed_capa |= ETH_LINK_SPEED_10M | ETH_LINK_SPEED_100M |
-				 ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_10M | RTE_ETH_LINK_SPEED_100M |
+				 RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
 	if (nicvf_hw_version(nic) != PCI_SUB_DEVICE_ID_CN81XX_NICVF)
-		dev_info->speed_capa |= ETH_LINK_SPEED_40G;
+		dev_info->speed_capa |= RTE_ETH_LINK_SPEED_40G;
 
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
 	dev_info->max_rx_pktlen = NIC_HW_MAX_MTU + RTE_ETHER_HDR_LEN;
@@ -1430,10 +1430,10 @@ nicvf_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	dev_info->default_txconf = (struct rte_eth_txconf) {
 		.tx_free_thresh = NICVF_DEFAULT_TX_FREE_THRESH,
-		.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE |
-			DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
-			DEV_TX_OFFLOAD_UDP_CKSUM          |
-			DEV_TX_OFFLOAD_TCP_CKSUM,
+		.offloads = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE |
+			RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM   |
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM          |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM,
 	};
 
 	return 0;
@@ -1597,8 +1597,8 @@ nicvf_vf_start(struct rte_eth_dev *dev, struct nicvf *nic, uint32_t rbdrsz)
 		     nic->rbdr->tail, nb_rbdr_desc, nic->vf_id);
 
 	/* Configure VLAN Strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	ret = nicvf_vlan_offload_config(dev, mask);
 
 	/* Based on the packet type(IPv4 or IPv6), the nicvf HW aligns L3 data
@@ -1727,11 +1727,11 @@ nicvf_dev_start(struct rte_eth_dev *dev)
 	if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
 					    2 * VLAN_TAG_SIZE > buffsz)
 		dev->data->scattered_rx = 1;
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER) != 0)
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER) != 0)
 		dev->data->scattered_rx = 1;
 
 	/* Setup MTU based on max_rx_pkt_len or default */
-	mtu = dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME ?
+	mtu = dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME ?
 		dev->data->dev_conf.rxmode.max_rx_pkt_len
 			-  RTE_ETHER_HDR_LEN : RTE_ETHER_MTU;
 
@@ -1914,8 +1914,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (rxmode->mq_mode & ETH_MQ_RX_RSS_FLAG)
-		rxmode->offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (!rte_eal_has_hugepages()) {
 		PMD_INIT_LOG(INFO, "Huge page is not configured");
@@ -1927,8 +1927,8 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE &&
-		rxmode->mq_mode != ETH_MQ_RX_RSS) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
+		rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
 		PMD_INIT_LOG(INFO, "Unsupported rx qmode %d", rxmode->mq_mode);
 		return -EINVAL;
 	}
@@ -1938,7 +1938,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		return -EINVAL;
 	}
 
-	if (conf->link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(INFO, "Setting link speed/duplex not supported");
 		return -EINVAL;
 	}
@@ -1973,7 +1973,7 @@ nicvf_dev_configure(struct rte_eth_dev *dev)
 		}
 	}
 
-	if (rxmode->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		nic->offload_cksum = 1;
 
 	PMD_INIT_LOG(DEBUG, "Configured ethdev port%d hwcap=0x%" PRIx64,
@@ -2050,8 +2050,8 @@ nicvf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct nicvf *nic = nicvf_pmd_priv(dev);
 	rxmode = &dev->data->dev_conf.rxmode;
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			nicvf_vlan_hw_strip(nic, true);
 		else
 			nicvf_vlan_hw_strip(nic, false);
diff --git a/drivers/net/thunderx/nicvf_ethdev.h b/drivers/net/thunderx/nicvf_ethdev.h
index b8dd905d0bd6..c1876bb9e1b7 100644
--- a/drivers/net/thunderx/nicvf_ethdev.h
+++ b/drivers/net/thunderx/nicvf_ethdev.h
@@ -16,33 +16,33 @@
 #define NICVF_UNKNOWN_DUPLEX		0xff
 
 #define NICVF_RSS_OFFLOAD_PASS1 ( \
-	ETH_RSS_PORT | \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_PORT | \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define NICVF_RSS_OFFLOAD_TUNNEL ( \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
 
 #define NICVF_TX_OFFLOAD_CAPA ( \
-	DEV_TX_OFFLOAD_IPV4_CKSUM       | \
-	DEV_TX_OFFLOAD_UDP_CKSUM        | \
-	DEV_TX_OFFLOAD_TCP_CKSUM        | \
-	DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
-	DEV_TX_OFFLOAD_MBUF_FAST_FREE   | \
-	DEV_TX_OFFLOAD_MULTI_SEGS)
+	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM       | \
+	RTE_ETH_TX_OFFLOAD_UDP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_TCP_CKSUM        | \
+	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE   | \
+	RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define NICVF_RX_OFFLOAD_CAPA ( \
-	DEV_RX_OFFLOAD_CHECKSUM    | \
-	DEV_RX_OFFLOAD_VLAN_STRIP  | \
-	DEV_RX_OFFLOAD_JUMBO_FRAME | \
-	DEV_RX_OFFLOAD_SCATTER     | \
-	DEV_RX_OFFLOAD_RSS_HASH)
+	RTE_ETH_RX_OFFLOAD_CHECKSUM    | \
+	RTE_ETH_RX_OFFLOAD_VLAN_STRIP  | \
+	RTE_ETH_RX_OFFLOAD_JUMBO_FRAME | \
+	RTE_ETH_RX_OFFLOAD_SCATTER     | \
+	RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 #define NICVF_DEFAULT_RX_FREE_THRESH    224
 #define NICVF_DEFAULT_TX_FREE_THRESH    224
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 006399468841..c6e8a14ddf3f 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -997,7 +997,7 @@ txgbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
 	rxbal = rd32(hw, TXGBE_RXBAL(rxq->reg_idx));
 	rxbah = rd32(hw, TXGBE_RXBAH(rxq->reg_idx));
 	rxcfg = rd32(hw, TXGBE_RXCFG(rxq->reg_idx));
-	if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP) {
+	if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
 		restart = (rxcfg & TXGBE_RXCFG_ENA) &&
 			!(rxcfg & TXGBE_RXCFG_VLAN);
 		rxcfg |= TXGBE_RXCFG_VLAN;
@@ -1032,7 +1032,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 	vlan_ext = (portctrl & TXGBE_PORTCTL_VLANEXT);
 	qinq = vlan_ext && (portctrl & TXGBE_PORTCTL_QINQ);
 	switch (vlan_type) {
-	case ETH_VLAN_TYPE_INNER:
+	case RTE_ETH_VLAN_TYPE_INNER:
 		if (vlan_ext) {
 			wr32m(hw, TXGBE_VLANCTL,
 				TXGBE_VLANCTL_TPID_MASK,
@@ -1052,7 +1052,7 @@ txgbe_vlan_tpid_set(struct rte_eth_dev *dev,
 				TXGBE_TAGTPID_LSB(tpid));
 		}
 		break;
-	case ETH_VLAN_TYPE_OUTER:
+	case RTE_ETH_VLAN_TYPE_OUTER:
 		if (vlan_ext) {
 			/* Only the high 16-bits is valid */
 			wr32m(hw, TXGBE_EXTAG,
@@ -1137,10 +1137,10 @@ txgbe_vlan_hw_strip_bitmap_set(struct rte_eth_dev *dev, uint16_t queue, bool on)
 
 	if (on) {
 		rxq->vlan_flags = PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED;
-		rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	} else {
 		rxq->vlan_flags = PKT_RX_VLAN;
-		rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+		rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 }
 
@@ -1239,7 +1239,7 @@ txgbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		rxq = dev->data->rx_queues[i];
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			txgbe_vlan_strip_queue_set(dev, i, 1);
 		else
 			txgbe_vlan_strip_queue_set(dev, i, 0);
@@ -1253,17 +1253,17 @@ txgbe_config_vlan_strip_on_all_queues(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	struct txgbe_rx_queue *rxq;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		rxmode = &dev->data->dev_conf.rxmode;
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 		else
 			for (i = 0; i < dev->data->nb_rx_queues; i++) {
 				rxq = dev->data->rx_queues[i];
-				rxq->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+				rxq->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 			}
 	}
 }
@@ -1274,25 +1274,25 @@ txgbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	struct rte_eth_rxmode *rxmode;
 	rxmode = &dev->data->dev_conf.rxmode;
 
-	if (mask & ETH_VLAN_STRIP_MASK)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
 		txgbe_vlan_hw_strip_config(dev);
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			txgbe_vlan_hw_filter_enable(dev);
 		else
 			txgbe_vlan_hw_filter_disable(dev);
 	}
 
-	if (mask & ETH_VLAN_EXTEND_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+	if (mask & RTE_ETH_VLAN_EXTEND_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
 			txgbe_vlan_hw_extend_enable(dev);
 		else
 			txgbe_vlan_hw_extend_disable(dev);
 	}
 
-	if (mask & ETH_QINQ_STRIP_MASK) {
-		if (rxmode->offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
+	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
 			txgbe_qinq_hw_strip_enable(dev);
 		else
 			txgbe_qinq_hw_strip_disable(dev);
@@ -1330,10 +1330,10 @@ txgbe_check_vf_rss_rxq_num(struct rte_eth_dev *dev, uint16_t nb_rx_q)
 	switch (nb_rx_q) {
 	case 1:
 	case 2:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_64_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_64_POOLS;
 		break;
 	case 4:
-		RTE_ETH_DEV_SRIOV(dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(dev).active = RTE_ETH_32_POOLS;
 		break;
 	default:
 		return -EINVAL;
@@ -1356,18 +1356,18 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 	if (RTE_ETH_DEV_SRIOV(dev).active != 0) {
 		/* check multi-queue mode */
 		switch (dev_conf->rxmode.mq_mode) {
-		case ETH_MQ_RX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_RX_VMDQ_DCB mode supported in SRIOV");
 			break;
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
 			/* DCB/RSS VMDQ in SRIOV mode, not implement yet */
 			PMD_INIT_LOG(ERR, "SRIOV active,"
 					" unsupported mq_mode rx %d.",
 					dev_conf->rxmode.mq_mode);
 			return -EINVAL;
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
-			dev->data->dev_conf.rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
+			dev->data->dev_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
 			if (nb_rx_q <= RTE_ETH_DEV_SRIOV(dev).nb_q_per_pool)
 				if (txgbe_check_vf_rss_rxq_num(dev, nb_rx_q)) {
 					PMD_INIT_LOG(ERR, "SRIOV is active,"
@@ -1377,13 +1377,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 					return -EINVAL;
 				}
 			break;
-		case ETH_MQ_RX_VMDQ_ONLY:
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_NONE:
 			/* if nothing mq mode configure, use default scheme */
 			dev->data->dev_conf.rxmode.mq_mode =
-				ETH_MQ_RX_VMDQ_ONLY;
+				RTE_ETH_MQ_RX_VMDQ_ONLY;
 			break;
-		default: /* ETH_MQ_RX_DCB, ETH_MQ_RX_DCB_RSS or ETH_MQ_TX_DCB*/
+		default: /* RTE_ETH_MQ_RX_DCB, RTE_ETH_MQ_RX_DCB_RSS or RTE_ETH_MQ_TX_DCB*/
 			/* SRIOV only works in VMDq enable mode */
 			PMD_INIT_LOG(ERR, "SRIOV is active,"
 					" wrong mq_mode rx %d.",
@@ -1392,13 +1392,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 		}
 
 		switch (dev_conf->txmode.mq_mode) {
-		case ETH_MQ_TX_VMDQ_DCB:
-			PMD_INIT_LOG(INFO, "ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
-			dev->data->dev_conf.txmode.mq_mode = ETH_MQ_TX_VMDQ_DCB;
+		case RTE_ETH_MQ_TX_VMDQ_DCB:
+			PMD_INIT_LOG(INFO, "RTE_ETH_MQ_TX_VMDQ_DCB mode supported in SRIOV");
+			dev->data->dev_conf.txmode.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB;
 			break;
-		default: /* ETH_MQ_TX_VMDQ_ONLY or ETH_MQ_TX_NONE */
+		default: /* RTE_ETH_MQ_TX_VMDQ_ONLY or RTE_ETH_MQ_TX_NONE */
 			dev->data->dev_conf.txmode.mq_mode =
-				ETH_MQ_TX_VMDQ_ONLY;
+				RTE_ETH_MQ_TX_VMDQ_ONLY;
 			break;
 		}
 
@@ -1413,13 +1413,13 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 			return -EINVAL;
 		}
 	} else {
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB_RSS) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB_RSS) {
 			PMD_INIT_LOG(ERR, "VMDQ+DCB+RSS mq_mode is"
 					  " not supported.");
 			return -EINVAL;
 		}
 		/* check configuration for vmdb+dcb mode */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_VMDQ_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_conf *conf;
 
 			if (nb_rx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1428,15 +1428,15 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->rx_adv_conf.vmdq_dcb_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools must be %d or %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 			const struct rte_eth_vmdq_dcb_tx_conf *conf;
 
 			if (nb_tx_q != TXGBE_VMDQ_DCB_NB_QUEUES) {
@@ -1445,39 +1445,39 @@ txgbe_check_mq_mode(struct rte_eth_dev *dev)
 				return -EINVAL;
 			}
 			conf = &dev_conf->tx_adv_conf.vmdq_dcb_tx_conf;
-			if (!(conf->nb_queue_pools == ETH_16_POOLS ||
-			       conf->nb_queue_pools == ETH_32_POOLS)) {
+			if (!(conf->nb_queue_pools == RTE_ETH_16_POOLS ||
+			       conf->nb_queue_pools == RTE_ETH_32_POOLS)) {
 				PMD_INIT_LOG(ERR, "VMDQ+DCB selected,"
 						" nb_queue_pools != %d and"
 						" nb_queue_pools != %d.",
-						ETH_16_POOLS, ETH_32_POOLS);
+						RTE_ETH_16_POOLS, RTE_ETH_32_POOLS);
 				return -EINVAL;
 			}
 		}
 
 		/* For DCB mode check our configuration before we go further */
-		if (dev_conf->rxmode.mq_mode == ETH_MQ_RX_DCB) {
+		if (dev_conf->rxmode.mq_mode == RTE_ETH_MQ_RX_DCB) {
 			const struct rte_eth_dcb_rx_conf *conf;
 
 			conf = &dev_conf->rx_adv_conf.dcb_rx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
 
-		if (dev_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+		if (dev_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 			const struct rte_eth_dcb_tx_conf *conf;
 
 			conf = &dev_conf->tx_adv_conf.dcb_tx_conf;
-			if (!(conf->nb_tcs == ETH_4_TCS ||
-			       conf->nb_tcs == ETH_8_TCS)) {
+			if (!(conf->nb_tcs == RTE_ETH_4_TCS ||
+			       conf->nb_tcs == RTE_ETH_8_TCS)) {
 				PMD_INIT_LOG(ERR, "DCB selected, nb_tcs != %d"
 						" and nb_tcs != %d.",
-						ETH_4_TCS, ETH_8_TCS);
+						RTE_ETH_4_TCS, RTE_ETH_8_TCS);
 				return -EINVAL;
 			}
 		}
@@ -1494,8 +1494,8 @@ txgbe_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* multiple queue mode checking */
 	ret  = txgbe_check_mq_mode(dev);
@@ -1637,7 +1637,7 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	 *    - half duplex (checked afterwards for valid speeds)
 	 *    - fixed speed: TODO implement
 	 */
-	if (dev->data->dev_conf.link_speeds & ETH_LINK_SPEED_FIXED) {
+	if (dev->data->dev_conf.link_speeds & RTE_ETH_LINK_SPEED_FIXED) {
 		PMD_INIT_LOG(ERR,
 		"Invalid link_speeds for port %u, fix speed not supported",
 				dev->data->port_id);
@@ -1704,15 +1704,15 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 		goto error;
 	}
 
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbe_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload");
 		goto error;
 	}
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_VMDQ_ONLY) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_VMDQ_ONLY) {
 		/* Enable vlan filtering for VMDq */
 		txgbe_vmdq_vlan_hw_filter_enable(dev);
 	}
@@ -1773,8 +1773,8 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	if (err)
 		goto error;
 
-	allowed_speeds = ETH_LINK_SPEED_100M | ETH_LINK_SPEED_1G |
-			ETH_LINK_SPEED_10G;
+	allowed_speeds = RTE_ETH_LINK_SPEED_100M | RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G;
 
 	link_speeds = &dev->data->dev_conf.link_speeds;
 	if (*link_speeds & ~allowed_speeds) {
@@ -1783,20 +1783,20 @@ txgbe_dev_start(struct rte_eth_dev *dev)
 	}
 
 	speed = 0x0;
-	if (*link_speeds == ETH_LINK_SPEED_AUTONEG) {
+	if (*link_speeds == RTE_ETH_LINK_SPEED_AUTONEG) {
 		speed = (TXGBE_LINK_SPEED_100M_FULL |
 			 TXGBE_LINK_SPEED_1GB_FULL |
 			 TXGBE_LINK_SPEED_10GB_FULL);
 	} else {
-		if (*link_speeds & ETH_LINK_SPEED_10G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_10G)
 			speed |= TXGBE_LINK_SPEED_10GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_5G)
 			speed |= TXGBE_LINK_SPEED_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_2_5G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_2_5G)
 			speed |= TXGBE_LINK_SPEED_2_5GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_1G)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_1G)
 			speed |= TXGBE_LINK_SPEED_1GB_FULL;
-		if (*link_speeds & ETH_LINK_SPEED_100M)
+		if (*link_speeds & RTE_ETH_LINK_SPEED_100M)
 			speed |= TXGBE_LINK_SPEED_100M_FULL;
 	}
 
@@ -2611,7 +2611,7 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->vmdq_queue_num = dev_info->max_rx_queues;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
@@ -2644,11 +2644,11 @@ txgbe_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_desc_lim = tx_desc_lim;
 
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
-	dev_info->speed_capa = ETH_LINK_SPEED_1G | ETH_LINK_SPEED_10G;
-	dev_info->speed_capa |= ETH_LINK_SPEED_100M;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G;
+	dev_info->speed_capa |= RTE_ETH_LINK_SPEED_100M;
 
 	/* Driver-preferred Rx/Tx parameters */
 	dev_info->default_rxportconf.burst_size = 32;
@@ -2705,10 +2705,10 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	int wait = 1;
 
 	memset(&link, 0, sizeof(link));
-	link.link_status = ETH_LINK_DOWN;
-	link.link_speed = ETH_SPEED_NUM_NONE;
-	link.link_duplex = ETH_LINK_HALF_DUPLEX;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_status = RTE_ETH_LINK_DOWN;
+	link.link_speed = RTE_ETH_SPEED_NUM_NONE;
+	link.link_duplex = RTE_ETH_LINK_HALF_DUPLEX;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	hw->mac.get_link_status = true;
 
@@ -2722,8 +2722,8 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	err = hw->mac.check_link(hw, &link_speed, &link_up, wait);
 
 	if (err != 0) {
-		link.link_speed = ETH_SPEED_NUM_100M;
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 		return rte_eth_linkstatus_set(dev, &link);
 	}
 
@@ -2742,34 +2742,34 @@ txgbe_dev_link_update_share(struct rte_eth_dev *dev,
 	}
 
 	intr->flags &= ~TXGBE_FLAG_NEED_LINK_CONFIG;
-	link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
+	link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
 
 	switch (link_speed) {
 	default:
 	case TXGBE_LINK_SPEED_UNKNOWN:
-		link.link_duplex = ETH_LINK_FULL_DUPLEX;
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_100M_FULL:
-		link.link_speed = ETH_SPEED_NUM_100M;
+		link.link_speed = RTE_ETH_SPEED_NUM_100M;
 		break;
 
 	case TXGBE_LINK_SPEED_1GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_1G;
+		link.link_speed = RTE_ETH_SPEED_NUM_1G;
 		break;
 
 	case TXGBE_LINK_SPEED_2_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_2_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_2_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_5GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_5G;
+		link.link_speed = RTE_ETH_SPEED_NUM_5G;
 		break;
 
 	case TXGBE_LINK_SPEED_10GB_FULL:
-		link.link_speed = ETH_SPEED_NUM_10G;
+		link.link_speed = RTE_ETH_SPEED_NUM_10G;
 		break;
 	}
 
@@ -2994,7 +2994,7 @@ txgbe_dev_link_status_print(struct rte_eth_dev *dev)
 		PMD_INIT_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 					(int)(dev->data->port_id),
 					(unsigned int)link.link_speed,
-			link.link_duplex == ETH_LINK_FULL_DUPLEX ?
+			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex");
 	} else {
 		PMD_INIT_LOG(INFO, " Port %d: Link Down",
@@ -3225,13 +3225,13 @@ txgbe_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
 		tx_pause = 0;
 
 	if (rx_pause && tx_pause)
-		fc_conf->mode = RTE_FC_FULL;
+		fc_conf->mode = RTE_ETH_FC_FULL;
 	else if (rx_pause)
-		fc_conf->mode = RTE_FC_RX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
 	else if (tx_pause)
-		fc_conf->mode = RTE_FC_TX_PAUSE;
+		fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
 	else
-		fc_conf->mode = RTE_FC_NONE;
+		fc_conf->mode = RTE_ETH_FC_NONE;
 
 	return 0;
 }
@@ -3363,16 +3363,16 @@ txgbe_dev_rss_reta_update(struct rte_eth_dev *dev,
 		return -ENOTSUP;
 	}
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += 4) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
 		if (!mask)
 			continue;
@@ -3404,16 +3404,16 @@ txgbe_dev_rss_reta_query(struct rte_eth_dev *dev,
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (reta_size != ETH_RSS_RETA_SIZE_128) {
+	if (reta_size != RTE_ETH_RSS_RETA_SIZE_128) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
 			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, ETH_RSS_RETA_SIZE_128);
+			"(%d)", reta_size, RTE_ETH_RSS_RETA_SIZE_128);
 		return -EINVAL;
 	}
 
 	for (i = 0; i < reta_size; i += 4) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)RS64(reta_conf[idx].mask, shift, 0xF);
 		if (!mask)
 			continue;
@@ -3593,12 +3593,12 @@ txgbe_uc_all_hash_table_set(struct rte_eth_dev *dev, uint8_t on)
 		return -ENOTSUP;
 
 	if (on) {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = ~0;
 			wr32(hw, TXGBE_UCADDRTBL(i), ~0);
 		}
 	} else {
-		for (i = 0; i < ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
+		for (i = 0; i < RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY; i++) {
 			uta_info->uta_shadow[i] = 0;
 			wr32(hw, TXGBE_UCADDRTBL(i), 0);
 		}
@@ -3622,15 +3622,15 @@ txgbe_convert_vm_rx_mask_to_val(uint16_t rx_mask, uint32_t orig_val)
 {
 	uint32_t new_val = orig_val;
 
-	if (rx_mask & ETH_VMDQ_ACCEPT_UNTAG)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_UNTAG)
 		new_val |= TXGBE_POOLETHCTL_UTA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_MC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_MC)
 		new_val |= TXGBE_POOLETHCTL_MCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_HASH_UC)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_HASH_UC)
 		new_val |= TXGBE_POOLETHCTL_UCHA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_BROADCAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_BROADCAST)
 		new_val |= TXGBE_POOLETHCTL_BCA;
-	if (rx_mask & ETH_VMDQ_ACCEPT_MULTICAST)
+	if (rx_mask & RTE_ETH_VMDQ_ACCEPT_MULTICAST)
 		new_val |= TXGBE_POOLETHCTL_MCP;
 
 	return new_val;
@@ -4281,15 +4281,15 @@ txgbe_start_timecounters(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 
 	switch (link.link_speed) {
-	case ETH_SPEED_NUM_100M:
+	case RTE_ETH_SPEED_NUM_100M:
 		incval = TXGBE_INCVAL_100;
 		shift = TXGBE_INCVAL_SHIFT_100;
 		break;
-	case ETH_SPEED_NUM_1G:
+	case RTE_ETH_SPEED_NUM_1G:
 		incval = TXGBE_INCVAL_1GB;
 		shift = TXGBE_INCVAL_SHIFT_1GB;
 		break;
-	case ETH_SPEED_NUM_10G:
+	case RTE_ETH_SPEED_NUM_10G:
 	default:
 		incval = TXGBE_INCVAL_10GB;
 		shift = TXGBE_INCVAL_SHIFT_10GB;
@@ -4645,7 +4645,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	uint8_t nb_tcs;
 	uint8_t i, j;
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_DCB_FLAG)
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_DCB_FLAG)
 		dcb_info->nb_tcs = dcb_config->num_tcs.pg_tcs;
 	else
 		dcb_info->nb_tcs = 1;
@@ -4656,7 +4656,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	if (dcb_config->vt_mode) { /* vt is enabled */
 		struct rte_eth_vmdq_dcb_conf *vmdq_rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = vmdq_rx_conf->dcb_tc[i];
 		if (RTE_ETH_DEV_SRIOV(dev).active > 0) {
 			for (j = 0; j < nb_tcs; j++) {
@@ -4680,9 +4680,9 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 	} else { /* vt is disabled */
 		struct rte_eth_dcb_rx_conf *rx_conf =
 				&dev->data->dev_conf.rx_adv_conf.dcb_rx_conf;
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			dcb_info->prio_tc[i] = rx_conf->dcb_tc[i];
-		if (dcb_info->nb_tcs == ETH_4_TCS) {
+		if (dcb_info->nb_tcs == RTE_ETH_4_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 32;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4695,7 +4695,7 @@ txgbe_dev_get_dcb_info(struct rte_eth_dev *dev,
 			dcb_info->tc_queue.tc_txq[0][1].nb_queue = 32;
 			dcb_info->tc_queue.tc_txq[0][2].nb_queue = 16;
 			dcb_info->tc_queue.tc_txq[0][3].nb_queue = 16;
-		} else if (dcb_info->nb_tcs == ETH_8_TCS) {
+		} else if (dcb_info->nb_tcs == RTE_ETH_8_TCS) {
 			for (i = 0; i < dcb_info->nb_tcs; i++) {
 				dcb_info->tc_queue.tc_rxq[0][i].base = i * 16;
 				dcb_info->tc_queue.tc_rxq[0][i].nb_queue = 16;
@@ -4925,7 +4925,7 @@ txgbe_dev_l2_tunnel_filter_add(struct rte_eth_dev *dev,
 	}
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_add(dev, l2_tunnel);
 		break;
 	default:
@@ -4956,7 +4956,7 @@ txgbe_dev_l2_tunnel_filter_del(struct rte_eth_dev *dev,
 		return ret;
 
 	switch (l2_tunnel->l2_tunnel_type) {
-	case RTE_L2_TUNNEL_TYPE_E_TAG:
+	case RTE_ETH_L2_TUNNEL_TYPE_E_TAG:
 		ret = txgbe_e_tag_filter_del(dev, l2_tunnel);
 		break;
 	default:
@@ -4996,7 +4996,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5004,7 +5004,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Geneve port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5012,7 +5012,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add Teredo port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5020,7 +5020,7 @@ txgbe_dev_udp_tunnel_port_add(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, udp_tunnel->udp_port);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		if (udp_tunnel->udp_port == 0) {
 			PMD_DRV_LOG(ERR, "Add VxLAN port 0 is not allowed.");
 			ret = -EINVAL;
@@ -5052,7 +5052,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		return -EINVAL;
 
 	switch (udp_tunnel->prot_type) {
-	case RTE_TUNNEL_TYPE_VXLAN:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5062,7 +5062,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_VXLANPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_GENEVE:
+	case RTE_ETH_TUNNEL_TYPE_GENEVE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_GENEVEPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5072,7 +5072,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_GENEVEPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_TEREDO:
+	case RTE_ETH_TUNNEL_TYPE_TEREDO:
 		cur_port = (uint16_t)rd32(hw, TXGBE_TEREDOPORT);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
@@ -5082,7 +5082,7 @@ txgbe_dev_udp_tunnel_port_del(struct rte_eth_dev *dev,
 		}
 		wr32(hw, TXGBE_TEREDOPORT, 0);
 		break;
-	case RTE_TUNNEL_TYPE_VXLAN_GPE:
+	case RTE_ETH_TUNNEL_TYPE_VXLAN_GPE:
 		cur_port = (uint16_t)rd32(hw, TXGBE_VXLANPORTGPE);
 		if (cur_port != udp_tunnel->udp_port) {
 			PMD_DRV_LOG(ERR, "Port %u does not exist.",
diff --git a/drivers/net/txgbe/txgbe_ethdev.h b/drivers/net/txgbe/txgbe_ethdev.h
index 3021933965c8..75a9e2580e27 100644
--- a/drivers/net/txgbe/txgbe_ethdev.h
+++ b/drivers/net/txgbe/txgbe_ethdev.h
@@ -56,15 +56,15 @@
 #define TXGBE_5TUPLE_MIN_PRI            1
 
 #define TXGBE_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
 
 #define TXGBE_MISC_VEC_ID               RTE_INTR_VEC_ZERO_OFFSET
 #define TXGBE_RX_VEC_START              RTE_INTR_VEC_RXTX_OFFSET
diff --git a/drivers/net/txgbe/txgbe_ethdev_vf.c b/drivers/net/txgbe/txgbe_ethdev_vf.c
index 18ed94bd277b..05773cb20786 100644
--- a/drivers/net/txgbe/txgbe_ethdev_vf.c
+++ b/drivers/net/txgbe/txgbe_ethdev_vf.c
@@ -491,14 +491,14 @@ txgbevf_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = TXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
-	dev_info->max_vmdq_pools = ETH_64_POOLS;
+	dev_info->max_vmdq_pools = RTE_ETH_64_POOLS;
 	dev_info->rx_queue_offload_capa = txgbe_get_rx_queue_offloads(dev);
 	dev_info->rx_offload_capa = (txgbe_get_rx_port_offloads(dev) |
 				     dev_info->rx_queue_offload_capa);
 	dev_info->tx_queue_offload_capa = txgbe_get_tx_queue_offloads(dev);
 	dev_info->tx_offload_capa = txgbe_get_tx_port_offloads(dev);
 	dev_info->hash_key_size = TXGBE_HKEY_MAX_INDEX * sizeof(uint32_t);
-	dev_info->reta_size = ETH_RSS_RETA_SIZE_128;
+	dev_info->reta_size = RTE_ETH_RSS_RETA_SIZE_128;
 	dev_info->flow_type_rss_offloads = TXGBE_RSS_OFFLOAD_ALL;
 
 	dev_info->default_rxconf = (struct rte_eth_rxconf) {
@@ -579,22 +579,22 @@ txgbevf_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "Configured Virtual Function port id: %d",
 		     dev->data->port_id);
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/*
 	 * VF has no ability to enable/disable HW CRC
 	 * Keep the persistent behavior the same as Host PF
 	 */
 #ifndef RTE_LIBRTE_TXGBE_PF_DISABLE_STRIP_CRC
-	if (conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_INIT_LOG(NOTICE, "VF can't disable HW CRC Strip");
-		conf->rxmode.offloads &= ~DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #else
-	if (!(conf->rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)) {
+	if (!(conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)) {
 		PMD_INIT_LOG(NOTICE, "VF can't enable HW CRC Strip");
-		conf->rxmode.offloads |= DEV_RX_OFFLOAD_KEEP_CRC;
+		conf->rxmode.offloads |= RTE_ETH_RX_OFFLOAD_KEEP_CRC;
 	}
 #endif
 
@@ -652,8 +652,8 @@ txgbevf_dev_start(struct rte_eth_dev *dev)
 	txgbevf_set_vfta_all(dev, 1);
 
 	/* Set HW strip */
-	mask = ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK |
-		ETH_VLAN_EXTEND_MASK;
+	mask = RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK |
+		RTE_ETH_VLAN_EXTEND_MASK;
 	err = txgbevf_vlan_offload_config(dev, mask);
 	if (err) {
 		PMD_INIT_LOG(ERR, "Unable to set VLAN offload (%d)", err);
@@ -896,10 +896,10 @@ txgbevf_vlan_offload_config(struct rte_eth_dev *dev, int mask)
 	int on = 0;
 
 	/* VF function only support hw strip feature, others are not support */
-	if (mask & ETH_VLAN_STRIP_MASK) {
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
 		for (i = 0; i < dev->data->nb_rx_queues; i++) {
 			rxq = dev->data->rx_queues[i];
-			on = !!(rxq->offloads &	DEV_RX_OFFLOAD_VLAN_STRIP);
+			on = !!(rxq->offloads &	RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 			txgbevf_vlan_strip_queue_set(dev, i, on);
 		}
 	}
diff --git a/drivers/net/txgbe/txgbe_fdir.c b/drivers/net/txgbe/txgbe_fdir.c
index 8abb86228608..e303d87176ed 100644
--- a/drivers/net/txgbe/txgbe_fdir.c
+++ b/drivers/net/txgbe/txgbe_fdir.c
@@ -102,22 +102,22 @@ txgbe_fdir_enable(struct txgbe_hw *hw, uint32_t fdirctrl)
  * flexbytes matching field, and drop queue (only for perfect matching mode).
  */
 static inline int
-configure_fdir_flags(const struct rte_fdir_conf *conf,
+configure_fdir_flags(const struct rte_eth_fdir_conf *conf,
 		     uint32_t *fdirctrl, uint32_t *flex)
 {
 	*fdirctrl = 0;
 	*flex = 0;
 
 	switch (conf->pballoc) {
-	case RTE_FDIR_PBALLOC_64K:
+	case RTE_ETH_FDIR_PBALLOC_64K:
 		/* 8k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_64K;
 		break;
-	case RTE_FDIR_PBALLOC_128K:
+	case RTE_ETH_FDIR_PBALLOC_128K:
 		/* 16k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_128K;
 		break;
-	case RTE_FDIR_PBALLOC_256K:
+	case RTE_ETH_FDIR_PBALLOC_256K:
 		/* 32k - 1 signature filters */
 		*fdirctrl |= TXGBE_FDIRCTL_BUF_256K;
 		break;
@@ -521,15 +521,15 @@ txgbe_atr_compute_hash(struct txgbe_atr_input *atr_input,
 
 static uint32_t
 atr_compute_perfect_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= PERFECT_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= PERFECT_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= PERFECT_BUCKET_64KB_HASH_MASK;
@@ -564,15 +564,15 @@ txgbe_fdir_check_cmd_complete(struct txgbe_hw *hw, uint32_t *fdircmd)
  */
 static uint32_t
 atr_compute_signature_hash(struct txgbe_atr_input *input,
-		enum rte_fdir_pballoc_type pballoc)
+		enum rte_eth_fdir_pballoc_type pballoc)
 {
 	uint32_t bucket_hash, sig_hash;
 
 	bucket_hash = txgbe_atr_compute_hash(input,
 				TXGBE_ATR_BUCKET_HASH_KEY);
-	if (pballoc == RTE_FDIR_PBALLOC_256K)
+	if (pballoc == RTE_ETH_FDIR_PBALLOC_256K)
 		bucket_hash &= SIG_BUCKET_256KB_HASH_MASK;
-	else if (pballoc == RTE_FDIR_PBALLOC_128K)
+	else if (pballoc == RTE_ETH_FDIR_PBALLOC_128K)
 		bucket_hash &= SIG_BUCKET_128KB_HASH_MASK;
 	else
 		bucket_hash &= SIG_BUCKET_64KB_HASH_MASK;
diff --git a/drivers/net/txgbe/txgbe_flow.c b/drivers/net/txgbe/txgbe_flow.c
index eae400b14176..6d7fd1842843 100644
--- a/drivers/net/txgbe/txgbe_flow.c
+++ b/drivers/net/txgbe/txgbe_flow.c
@@ -1215,7 +1215,7 @@ cons_parse_l2_tn_filter(struct rte_eth_dev *dev,
 		return -rte_errno;
 	}
 
-	filter->l2_tunnel_type = RTE_L2_TUNNEL_TYPE_E_TAG;
+	filter->l2_tunnel_type = RTE_ETH_L2_TUNNEL_TYPE_E_TAG;
 	/**
 	 * grp and e_cid_base are bit fields and only use 14 bits.
 	 * e-tag id is taken as little endian by HW.
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index ccd747973ba2..445733f3ba46 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -372,7 +372,7 @@ txgbe_crypto_create_session(void *device,
 	aead_xform = &conf->crypto_xform->aead;
 
 	if (conf->ipsec.direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
-		if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) {
+		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
@@ -380,7 +380,7 @@ txgbe_crypto_create_session(void *device,
 			return -ENOTSUP;
 		}
 	} else {
-		if (dev_conf->txmode.offloads & DEV_TX_OFFLOAD_SECURITY) {
+		if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 			ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
 		} else {
 			PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
@@ -611,11 +611,11 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	tx_offloads = dev->data->dev_conf.txmode.offloads;
 
 	/* sanity checks */
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		PMD_DRV_LOG(ERR, "RSC and IPsec not supported");
 		return -1;
 	}
-	if (rx_offloads & DEV_RX_OFFLOAD_KEEP_CRC) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) {
 		PMD_DRV_LOG(ERR, "HW CRC strip needs to be enabled for IPsec");
 		return -1;
 	}
@@ -634,7 +634,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 	reg |= TXGBE_SECRXCTL_CRCSTRIP;
 	wr32(hw, TXGBE_SECRXCTL, reg);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_SECURITY) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
 		wr32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA, 0);
 		reg = rd32m(hw, TXGBE_SECRXCTL, TXGBE_SECRXCTL_ODSA);
 		if (reg != 0) {
@@ -642,7 +642,7 @@ txgbe_crypto_enable_ipsec(struct rte_eth_dev *dev)
 			return -1;
 		}
 	}
-	if (tx_offloads & DEV_TX_OFFLOAD_SECURITY) {
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
 		wr32(hw, TXGBE_SECTXCTL, TXGBE_SECTXCTL_STFWD);
 		reg = rd32(hw, TXGBE_SECTXCTL);
 		if (reg != TXGBE_SECTXCTL_STFWD) {
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index 494d779a3c9d..44f6f103edd2 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -103,15 +103,15 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
 	memset(uta_info, 0, sizeof(struct txgbe_uta_info));
 	hw->mac.mc_filter_type = 0;
 
-	if (vf_num >= ETH_32_POOLS) {
+	if (vf_num >= RTE_ETH_32_POOLS) {
 		nb_queue = 2;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_64_POOLS;
-	} else if (vf_num >= ETH_16_POOLS) {
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_64_POOLS;
+	} else if (vf_num >= RTE_ETH_16_POOLS) {
 		nb_queue = 4;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_32_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_32_POOLS;
 	} else {
 		nb_queue = 8;
-		RTE_ETH_DEV_SRIOV(eth_dev).active = ETH_16_POOLS;
+		RTE_ETH_DEV_SRIOV(eth_dev).active = RTE_ETH_16_POOLS;
 	}
 
 	RTE_ETH_DEV_SRIOV(eth_dev).nb_q_per_pool = nb_queue;
@@ -258,13 +258,13 @@ int txgbe_pf_host_configure(struct rte_eth_dev *eth_dev)
 	gcr_ext &= ~TXGBE_PORTCTL_NUMVT_MASK;
 
 	switch (RTE_ETH_DEV_SRIOV(eth_dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_64;
 		break;
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_32;
 		break;
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		gcr_ext |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	}
@@ -613,29 +613,29 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
 	/* Notify VF of number of DCB traffic classes */
 	eth_conf = &eth_dev->data->dev_conf;
 	switch (eth_conf->txmode.mq_mode) {
-	case ETH_MQ_TX_NONE:
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_NONE:
+	case RTE_ETH_MQ_TX_DCB:
 		PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
 			", but its tx mode = %d\n", vf,
 			eth_conf->txmode.mq_mode);
 		return -1;
 
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		vmdq_dcb_tx_conf = &eth_conf->tx_adv_conf.vmdq_dcb_tx_conf;
 		switch (vmdq_dcb_tx_conf->nb_queue_pools) {
-		case ETH_16_POOLS:
-			num_tcs = ETH_8_TCS;
+		case RTE_ETH_16_POOLS:
+			num_tcs = RTE_ETH_8_TCS;
 			break;
-		case ETH_32_POOLS:
-			num_tcs = ETH_4_TCS;
+		case RTE_ETH_32_POOLS:
+			num_tcs = RTE_ETH_4_TCS;
 			break;
 		default:
 			return -1;
 		}
 		break;
 
-	/* ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
-	case ETH_MQ_TX_VMDQ_ONLY:
+	/* RTE_ETH_MQ_TX_VMDQ_ONLY,  DCB not enabled */
+	case RTE_ETH_MQ_TX_VMDQ_ONLY:
 		hw = TXGBE_DEV_HW(eth_dev);
 		vmvir = rd32(hw, TXGBE_POOLTAG(vf));
 		vlana = vmvir & TXGBE_POOLTAG_ACT_MASK;
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 1a261287d1bd..c302d49af728 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -1939,7 +1939,7 @@ txgbe_recv_pkts_lro_bulk_alloc(void *rx_queue, struct rte_mbuf **rx_pkts,
 uint64_t
 txgbe_get_rx_queue_offloads(struct rte_eth_dev *dev __rte_unused)
 {
-	return DEV_RX_OFFLOAD_VLAN_STRIP;
+	return RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 }
 
 uint64_t
@@ -1949,35 +1949,35 @@ txgbe_get_rx_port_offloads(struct rte_eth_dev *dev)
 	struct txgbe_hw *hw = TXGBE_DEV_HW(dev);
 	struct rte_eth_dev_sriov *sriov = &RTE_ETH_DEV_SRIOV(dev);
 
-	offloads = DEV_RX_OFFLOAD_IPV4_CKSUM  |
-		   DEV_RX_OFFLOAD_UDP_CKSUM   |
-		   DEV_RX_OFFLOAD_TCP_CKSUM   |
-		   DEV_RX_OFFLOAD_KEEP_CRC    |
-		   DEV_RX_OFFLOAD_JUMBO_FRAME |
-		   DEV_RX_OFFLOAD_VLAN_FILTER |
-		   DEV_RX_OFFLOAD_RSS_HASH |
-		   DEV_RX_OFFLOAD_SCATTER;
+	offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  |
+		   RTE_ETH_RX_OFFLOAD_UDP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_TCP_CKSUM   |
+		   RTE_ETH_RX_OFFLOAD_KEEP_CRC    |
+		   RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |
+		   RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+		   RTE_ETH_RX_OFFLOAD_RSS_HASH |
+		   RTE_ETH_RX_OFFLOAD_SCATTER;
 
 	if (!txgbe_is_vf(dev))
-		offloads |= (DEV_RX_OFFLOAD_VLAN_FILTER |
-			     DEV_RX_OFFLOAD_QINQ_STRIP |
-			     DEV_RX_OFFLOAD_VLAN_EXTEND);
+		offloads |= (RTE_ETH_RX_OFFLOAD_VLAN_FILTER |
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP |
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 
 	/*
 	 * RSC is only supported by PF devices in a non-SR-IOV
 	 * mode.
 	 */
 	if (hw->mac.type == txgbe_mac_raptor && !sriov->active)
-		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+		offloads |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
 	if (hw->mac.type == txgbe_mac_raptor)
-		offloads |= DEV_RX_OFFLOAD_MACSEC_STRIP;
+		offloads |= RTE_ETH_RX_OFFLOAD_MACSEC_STRIP;
 
-	offloads |= DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM;
+	offloads |= RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		offloads |= DEV_RX_OFFLOAD_SECURITY;
+		offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 #endif
 
 	return offloads;
@@ -2202,32 +2202,32 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
 	uint64_t tx_offload_capa;
 
 	tx_offload_capa =
-		DEV_TX_OFFLOAD_VLAN_INSERT |
-		DEV_TX_OFFLOAD_IPV4_CKSUM  |
-		DEV_TX_OFFLOAD_UDP_CKSUM   |
-		DEV_TX_OFFLOAD_TCP_CKSUM   |
-		DEV_TX_OFFLOAD_SCTP_CKSUM  |
-		DEV_TX_OFFLOAD_TCP_TSO     |
-		DEV_TX_OFFLOAD_UDP_TSO	   |
-		DEV_TX_OFFLOAD_UDP_TNL_TSO	|
-		DEV_TX_OFFLOAD_IP_TNL_TSO	|
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO	|
-		DEV_TX_OFFLOAD_GRE_TNL_TSO	|
-		DEV_TX_OFFLOAD_IPIP_TNL_TSO	|
-		DEV_TX_OFFLOAD_GENEVE_TNL_TSO	|
-		DEV_TX_OFFLOAD_MULTI_SEGS;
+		RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+		RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+		RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+		RTE_ETH_TX_OFFLOAD_TCP_TSO     |
+		RTE_ETH_TX_OFFLOAD_UDP_TSO	   |
+		RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO	|
+		RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	if (!txgbe_is_vf(dev))
-		tx_offload_capa |= DEV_TX_OFFLOAD_QINQ_INSERT;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_MACSEC_INSERT;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
 
-	tx_offload_capa |= DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM |
-			   DEV_TX_OFFLOAD_OUTER_UDP_CKSUM;
+	tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
 
 #ifdef RTE_LIB_SECURITY
 	if (dev->security_ctx)
-		tx_offload_capa |= DEV_TX_OFFLOAD_SECURITY;
+		tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
 #endif
 	return tx_offload_capa;
 }
@@ -2329,7 +2329,7 @@ txgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	txq->tx_deferred_start = tx_conf->tx_deferred_start;
 #ifdef RTE_LIB_SECURITY
 	txq->using_ipsec = !!(dev->data->dev_conf.txmode.offloads &
-			DEV_TX_OFFLOAD_SECURITY);
+			RTE_ETH_TX_OFFLOAD_SECURITY);
 #endif
 
 	/* Modification to set tail pointer for virtual function
@@ -2579,7 +2579,7 @@ txgbe_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->reg_idx = (uint16_t)((RTE_ETH_DEV_SRIOV(dev).active == 0) ?
 		queue_idx : RTE_ETH_DEV_SRIOV(dev).def_pool_q_idx + queue_idx);
 	rxq->port_id = dev->data->port_id;
-	if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		rxq->crc_len = RTE_ETHER_CRC_LEN;
 	else
 		rxq->crc_len = 0;
@@ -2880,20 +2880,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		mrqc &= ~TXGBE_VFPLCFG_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_VFPLCFG_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2910,20 +2910,20 @@ txgbe_dev_rss_hash_update(struct rte_eth_dev *dev,
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		mrqc &= ~TXGBE_RACTL_RSSMASK;
-		if (rss_hf & ETH_RSS_IPV4)
+		if (rss_hf & RTE_ETH_RSS_IPV4)
 			mrqc |= TXGBE_RACTL_RSSIPV4;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 			mrqc |= TXGBE_RACTL_RSSIPV4TCP;
-		if (rss_hf & ETH_RSS_IPV6 ||
-		    rss_hf & ETH_RSS_IPV6_EX)
+		if (rss_hf & RTE_ETH_RSS_IPV6 ||
+		    rss_hf & RTE_ETH_RSS_IPV6_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP ||
-		    rss_hf & ETH_RSS_IPV6_TCP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_TCP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6TCP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 			mrqc |= TXGBE_RACTL_RSSIPV4UDP;
-		if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP ||
-		    rss_hf & ETH_RSS_IPV6_UDP_EX)
+		if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP ||
+		    rss_hf & RTE_ETH_RSS_IPV6_UDP_EX)
 			mrqc |= TXGBE_RACTL_RSSIPV6UDP;
 
 		if (rss_hf)
@@ -2964,39 +2964,39 @@ txgbe_dev_rss_hash_conf_get(struct rte_eth_dev *dev,
 	if (hw->mac.type == txgbe_mac_raptor_vf) {
 		mrqc = rd32(hw, TXGBE_VFPLCFG);
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_VFPLCFG_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_VFPLCFG_RSSENA))
 			rss_hf = 0;
 	} else {
 		mrqc = rd32(hw, TXGBE_RACTL);
 		if (mrqc & TXGBE_RACTL_RSSIPV4)
-			rss_hf |= ETH_RSS_IPV4;
+			rss_hf |= RTE_ETH_RSS_IPV4;
 		if (mrqc & TXGBE_RACTL_RSSIPV4TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6)
-			rss_hf |= ETH_RSS_IPV6 |
-				  ETH_RSS_IPV6_EX;
+			rss_hf |= RTE_ETH_RSS_IPV6 |
+				  RTE_ETH_RSS_IPV6_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV6TCP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP |
-				  ETH_RSS_IPV6_TCP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				  RTE_ETH_RSS_IPV6_TCP_EX;
 		if (mrqc & TXGBE_RACTL_RSSIPV4UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 		if (mrqc & TXGBE_RACTL_RSSIPV6UDP)
-			rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP |
-				  ETH_RSS_IPV6_UDP_EX;
+			rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				  RTE_ETH_RSS_IPV6_UDP_EX;
 		if (!(mrqc & TXGBE_RACTL_RSSENA))
 			rss_hf = 0;
 	}
@@ -3026,7 +3026,7 @@ txgbe_rss_configure(struct rte_eth_dev *dev)
 	 */
 	if (adapter->rss_reta_updated == 0) {
 		reta = 0;
-		for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+		for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 			if (j == dev->data->nb_rx_queues)
 				j = 0;
 			reta = (reta >> 8) | LS32(j, 24, 0xFF);
@@ -3063,12 +3063,12 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	cfg = &dev->data->dev_conf.rx_adv_conf.vmdq_dcb_conf;
 	num_pools = cfg->nb_queue_pools;
 	/* Check we have a valid number of pools */
-	if (num_pools != ETH_16_POOLS && num_pools != ETH_32_POOLS) {
+	if (num_pools != RTE_ETH_16_POOLS && num_pools != RTE_ETH_32_POOLS) {
 		txgbe_rss_disable(dev);
 		return;
 	}
 	/* 16 pools -> 8 traffic classes, 32 pools -> 4 traffic classes */
-	nb_tcs = (uint8_t)(ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
+	nb_tcs = (uint8_t)(RTE_ETH_VMDQ_DCB_NUM_QUEUES / (int)num_pools);
 
 	/*
 	 * split rx buffer up into sections, each for 1 traffic class
@@ -3083,7 +3083,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 	/* zero alloc all unused TCs */
-	for (i = nb_tcs; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = nb_tcs; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		uint32_t rxpbsize = rd32(hw, TXGBE_PBRXSIZE(i));
 
 		rxpbsize &= (~(0x3FF << 10));
@@ -3091,7 +3091,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 	}
 
-	if (num_pools == ETH_16_POOLS) {
+	if (num_pools == RTE_ETH_16_POOLS) {
 		mrqc = TXGBE_PORTCTL_NUMTC_8;
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 	} else {
@@ -3110,7 +3110,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 	wr32(hw, TXGBE_POOLCTL, vt_ctl);
 
 	queue_mapping = 0;
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 		/*
 		 * mapping is done with 3 bits per priority,
 		 * so shift by i*3 each time
@@ -3131,7 +3131,7 @@ txgbe_vmdq_dcb_configure(struct rte_eth_dev *dev)
 		wr32(hw, TXGBE_VLANTBL(i), 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_POOLRXENA(0),
-			num_pools == ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+			num_pools == RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	wr32(hw, TXGBE_ETHADDRIDX, 0);
 	wr32(hw, TXGBE_ETHADDRASSL, 0xFFFFFFFF);
@@ -3201,7 +3201,7 @@ txgbe_vmdq_dcb_hw_tx_config(struct rte_eth_dev *dev,
 	/*PF VF Transmit Enable*/
 	wr32(hw, TXGBE_POOLTXENA(0),
 		vmdq_tx_conf->nb_queue_pools ==
-				ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
+				RTE_ETH_16_POOLS ? 0xFFFF : 0xFFFFFFFF);
 
 	/*Configure general DCB TX parameters*/
 	txgbe_dcb_tx_hw_config(dev, dcb_config);
@@ -3217,12 +3217,12 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_rx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_rx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3232,7 +3232,7 @@ txgbe_vmdq_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3250,12 +3250,12 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	uint8_t i, j;
 
 	/* convert rte_eth_conf.rx_adv_conf to struct txgbe_dcb_config */
-	if (vmdq_tx_conf->nb_queue_pools == ETH_16_POOLS) {
-		dcb_config->num_tcs.pg_tcs = ETH_8_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_8_TCS;
+	if (vmdq_tx_conf->nb_queue_pools == RTE_ETH_16_POOLS) {
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_8_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_8_TCS;
 	} else {
-		dcb_config->num_tcs.pg_tcs = ETH_4_TCS;
-		dcb_config->num_tcs.pfc_tcs = ETH_4_TCS;
+		dcb_config->num_tcs.pg_tcs = RTE_ETH_4_TCS;
+		dcb_config->num_tcs.pfc_tcs = RTE_ETH_4_TCS;
 	}
 
 	/* Initialize User Priority to Traffic Class mapping */
@@ -3265,7 +3265,7 @@ txgbe_dcb_vt_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = vmdq_tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3292,7 +3292,7 @@ txgbe_dcb_rx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = rx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_RX_CONFIG].up_to_tc_bitmap |=
@@ -3319,7 +3319,7 @@ txgbe_dcb_tx_config(struct rte_eth_dev *dev,
 	}
 
 	/* User Priority to Traffic Class mapping */
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		j = tx_conf->dcb_tc[i];
 		tc = &dcb_config->tc_config[j];
 		tc->path[TXGBE_DCB_TX_CONFIG].up_to_tc_bitmap |=
@@ -3455,7 +3455,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	struct txgbe_bw_conf *bw_conf = TXGBE_DEV_BW_CONF(dev);
 
 	switch (dev->data->dev_conf.rxmode.mq_mode) {
-	case ETH_MQ_RX_VMDQ_DCB:
+	case RTE_ETH_MQ_RX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/*
@@ -3466,8 +3466,8 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		/*Configure general VMDQ and DCB RX parameters*/
 		txgbe_vmdq_dcb_configure(dev);
 		break;
-	case ETH_MQ_RX_DCB:
-	case ETH_MQ_RX_DCB_RSS:
+	case RTE_ETH_MQ_RX_DCB:
+	case RTE_ETH_MQ_RX_DCB_RSS:
 		dcb_config->vt_mode = false;
 		config_dcb_rx = DCB_RX_CONFIG;
 		/* Get dcb TX configuration parameters from rte_eth_conf */
@@ -3480,7 +3480,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		break;
 	}
 	switch (dev->data->dev_conf.txmode.mq_mode) {
-	case ETH_MQ_TX_VMDQ_DCB:
+	case RTE_ETH_MQ_TX_VMDQ_DCB:
 		dcb_config->vt_mode = true;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB and VT TX configuration parameters
@@ -3491,7 +3491,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 		txgbe_vmdq_dcb_hw_tx_config(dev, dcb_config);
 		break;
 
-	case ETH_MQ_TX_DCB:
+	case RTE_ETH_MQ_TX_DCB:
 		dcb_config->vt_mode = false;
 		config_dcb_tx = DCB_TX_CONFIG;
 		/* get DCB TX configuration parameters from rte_eth_conf */
@@ -3507,15 +3507,15 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	nb_tcs = dcb_config->num_tcs.pfc_tcs;
 	/* Unpack map */
 	txgbe_dcb_unpack_map_cee(dcb_config, TXGBE_DCB_RX_CONFIG, map);
-	if (nb_tcs == ETH_4_TCS) {
+	if (nb_tcs == RTE_ETH_4_TCS) {
 		/* Avoid un-configured priority mapping to TC0 */
 		uint8_t j = 4;
 		uint8_t mask = 0xFF;
 
-		for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
+		for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES - 4; i++)
 			mask = (uint8_t)(mask & (~(1 << map[i])));
 		for (i = 0; mask && (i < TXGBE_DCB_TC_MAX); i++) {
-			if ((mask & 0x1) && j < ETH_DCB_NUM_USER_PRIORITIES)
+			if ((mask & 0x1) && j < RTE_ETH_DCB_NUM_USER_PRIORITIES)
 				map[j++] = i;
 			mask >>= 1;
 		}
@@ -3556,7 +3556,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBRXSIZE(i), rxpbsize);
 
 		/* zero alloc all unused TCs */
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++)
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++)
 			wr32(hw, TXGBE_PBRXSIZE(i), 0);
 	}
 	if (config_dcb_tx) {
@@ -3572,7 +3572,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			wr32(hw, TXGBE_PBTXDMATH(i), txpbthresh);
 		}
 		/* Clear unused TCs, if any, to zero buffer size*/
-		for (; i < ETH_DCB_NUM_USER_PRIORITIES; i++) {
+		for (; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 			wr32(hw, TXGBE_PBTXSIZE(i), 0);
 			wr32(hw, TXGBE_PBTXDMATH(i), 0);
 		}
@@ -3614,7 +3614,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 	txgbe_dcb_config_tc_stats_raptor(hw, dcb_config);
 
 	/* Check if the PFC is supported */
-	if (dev->data->dev_conf.dcb_capability_en & ETH_DCB_PFC_SUPPORT) {
+	if (dev->data->dev_conf.dcb_capability_en & RTE_ETH_DCB_PFC_SUPPORT) {
 		pbsize = (uint16_t)(rx_buffer_size / nb_tcs);
 		for (i = 0; i < nb_tcs; i++) {
 			/* If the TC count is 8,
@@ -3628,7 +3628,7 @@ txgbe_dcb_hw_configure(struct rte_eth_dev *dev,
 			tc->pfc = txgbe_dcb_pfc_enabled;
 		}
 		txgbe_dcb_unpack_pfc_cee(dcb_config, map, &pfc_en);
-		if (dcb_config->num_tcs.pfc_tcs == ETH_4_TCS)
+		if (dcb_config->num_tcs.pfc_tcs == RTE_ETH_4_TCS)
 			pfc_en &= 0x0F;
 		ret = txgbe_dcb_config_pfc(hw, pfc_en, map);
 	}
@@ -3699,12 +3699,12 @@ void txgbe_configure_dcb(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE();
 
 	/* check support mq_mode for DCB */
-	if (dev_conf->rxmode.mq_mode != ETH_MQ_RX_VMDQ_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB &&
-	    dev_conf->rxmode.mq_mode != ETH_MQ_RX_DCB_RSS)
+	if (dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_VMDQ_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB &&
+	    dev_conf->rxmode.mq_mode != RTE_ETH_MQ_RX_DCB_RSS)
 		return;
 
-	if (dev->data->nb_rx_queues > ETH_DCB_NUM_QUEUES)
+	if (dev->data->nb_rx_queues > RTE_ETH_DCB_NUM_QUEUES)
 		return;
 
 	/** Configure DCB hardware **/
@@ -3760,7 +3760,7 @@ txgbe_vmdq_rx_hw_configure(struct rte_eth_dev *dev)
 
 	/* pool enabling for receive - 64 */
 	wr32(hw, TXGBE_POOLRXENA(0), UINT32_MAX);
-	if (num_pools == ETH_64_POOLS)
+	if (num_pools == RTE_ETH_64_POOLS)
 		wr32(hw, TXGBE_POOLRXENA(1), UINT32_MAX);
 
 	/*
@@ -3884,11 +3884,11 @@ txgbe_config_vf_rss(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
@@ -3911,15 +3911,15 @@ txgbe_config_vf_default(struct rte_eth_dev *dev)
 	mrqc = rd32(hw, TXGBE_PORTCTL);
 	mrqc &= ~(TXGBE_PORTCTL_NUMTC_MASK | TXGBE_PORTCTL_NUMVT_MASK);
 	switch (RTE_ETH_DEV_SRIOV(dev).active) {
-	case ETH_64_POOLS:
+	case RTE_ETH_64_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_64;
 		break;
 
-	case ETH_32_POOLS:
+	case RTE_ETH_32_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_32;
 		break;
 
-	case ETH_16_POOLS:
+	case RTE_ETH_16_POOLS:
 		mrqc |= TXGBE_PORTCTL_NUMVT_16;
 		break;
 	default:
@@ -3942,21 +3942,21 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * any DCB/RSS w/o VMDq multi-queue setting
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_DCB_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_rss_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 
-		case ETH_MQ_RX_VMDQ_ONLY:
+		case RTE_ETH_MQ_RX_VMDQ_ONLY:
 			txgbe_vmdq_rx_hw_configure(dev);
 			break;
 
-		case ETH_MQ_RX_NONE:
+		case RTE_ETH_MQ_RX_NONE:
 		default:
 			/* if mq_mode is none, disable rss mode.*/
 			txgbe_rss_disable(dev);
@@ -3967,18 +3967,18 @@ txgbe_dev_mq_rx_configure(struct rte_eth_dev *dev)
 		 * Support RSS together with SRIOV.
 		 */
 		switch (dev->data->dev_conf.rxmode.mq_mode) {
-		case ETH_MQ_RX_RSS:
-		case ETH_MQ_RX_VMDQ_RSS:
+		case RTE_ETH_MQ_RX_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_RSS:
 			txgbe_config_vf_rss(dev);
 			break;
-		case ETH_MQ_RX_VMDQ_DCB:
-		case ETH_MQ_RX_DCB:
+		case RTE_ETH_MQ_RX_VMDQ_DCB:
+		case RTE_ETH_MQ_RX_DCB:
 		/* In SRIOV, the configuration is the same as VMDq case */
 			txgbe_vmdq_dcb_configure(dev);
 			break;
 		/* DCB/RSS together with SRIOV is not supported */
-		case ETH_MQ_RX_VMDQ_DCB_RSS:
-		case ETH_MQ_RX_DCB_RSS:
+		case RTE_ETH_MQ_RX_VMDQ_DCB_RSS:
+		case RTE_ETH_MQ_RX_DCB_RSS:
 			PMD_INIT_LOG(ERR,
 				"Could not support DCB/RSS with VMDq & SRIOV");
 			return -1;
@@ -4008,7 +4008,7 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV inactive scheme
 		 * any DCB w/o VMDq multi-queue setting
 		 */
-		if (dev->data->dev_conf.txmode.mq_mode == ETH_MQ_TX_VMDQ_ONLY)
+		if (dev->data->dev_conf.txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_ONLY)
 			txgbe_vmdq_tx_hw_configure(hw);
 		else
 			wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_NUMVT_MASK, 0);
@@ -4018,13 +4018,13 @@ txgbe_dev_mq_tx_configure(struct rte_eth_dev *dev)
 		 * SRIOV active scheme
 		 * FIXME if support DCB together with VMDq & SRIOV
 		 */
-		case ETH_64_POOLS:
+		case RTE_ETH_64_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_64;
 			break;
-		case ETH_32_POOLS:
+		case RTE_ETH_32_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_32;
 			break;
-		case ETH_16_POOLS:
+		case RTE_ETH_16_POOLS:
 			mtqc = TXGBE_PORTCTL_NUMVT_16;
 			break;
 		default:
@@ -4087,10 +4087,10 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* Sanity check */
 	dev->dev_ops->dev_infos_get(dev, &dev_info);
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO)
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		rsc_capable = true;
 
-	if (!rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if (!rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO is requested on HW that doesn't "
 				   "support it");
 		return -EINVAL;
@@ -4098,22 +4098,22 @@ txgbe_set_rsc(struct rte_eth_dev *dev)
 
 	/* RSC global configuration */
 
-	if ((rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC) &&
-	     (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+	if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC) &&
+	     (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)) {
 		PMD_INIT_LOG(CRIT, "LRO can't be enabled when HW CRC "
 				    "is disabled");
 		return -EINVAL;
 	}
 
 	rfctl = rd32(hw, TXGBE_PSRCTL);
-	if (rsc_capable && (rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (rsc_capable && (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		rfctl &= ~TXGBE_PSRCTL_RSCDIA;
 	else
 		rfctl |= TXGBE_PSRCTL_RSCDIA;
 	wr32(hw, TXGBE_PSRCTL, rfctl);
 
 	/* If LRO hasn't been requested - we are done here. */
-	if (!(rx_conf->offloads & DEV_RX_OFFLOAD_TCP_LRO))
+	if (!(rx_conf->offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO))
 		return 0;
 
 	/* Set PSRCTL.RSCACK bit */
@@ -4253,7 +4253,7 @@ txgbe_set_rx_function(struct rte_eth_dev *dev)
 		struct txgbe_rx_queue *rxq = dev->data->rx_queues[i];
 
 		rxq->using_ipsec = !!(dev->data->dev_conf.rxmode.offloads &
-				DEV_RX_OFFLOAD_SECURITY);
+				RTE_ETH_RX_OFFLOAD_SECURITY);
 	}
 #endif
 }
@@ -4296,7 +4296,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Configure CRC stripping, if any.
 	 */
 	hlreg0 = rd32(hw, TXGBE_SECRXCTL);
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 		hlreg0 &= ~TXGBE_SECRXCTL_CRCSTRIP;
 	else
 		hlreg0 |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4305,7 +4305,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	/*
 	 * Configure jumbo frame support, if any.
 	 */
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		wr32m(hw, TXGBE_FRMSZ, TXGBE_FRMSZ_MAX_MASK,
 			TXGBE_FRMSZ_MAX(rx_conf->max_rx_pkt_len));
 	} else {
@@ -4329,7 +4329,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rx_conf->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rx_conf->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Setup RX queues */
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -4339,7 +4339,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		 * Reset crc_len in case it was changed after queue setup by a
 		 * call to configure.
 		 */
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rxq->crc_len = RTE_ETHER_CRC_LEN;
 		else
 			rxq->crc_len = 0;
@@ -4376,11 +4376,11 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 		if (dev->data->dev_conf.rxmode.max_rx_pkt_len +
 					    2 * TXGBE_VLAN_TAG_SIZE > buf_size)
 			dev->data->scattered_rx = 1;
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rx_conf->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rx_conf->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_SCATTER)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
 		dev->data->scattered_rx = 1;
 
 	/*
@@ -4395,7 +4395,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 	 */
 	rxcsum = rd32(hw, TXGBE_PSRCTL);
 	rxcsum |= TXGBE_PSRCTL_PCSD;
-	if (rx_conf->offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		rxcsum |= TXGBE_PSRCTL_L4CSUM;
 	else
 		rxcsum &= ~TXGBE_PSRCTL_L4CSUM;
@@ -4404,7 +4404,7 @@ txgbe_dev_rx_init(struct rte_eth_dev *dev)
 
 	if (hw->mac.type == txgbe_mac_raptor) {
 		rdrxctl = rd32(hw, TXGBE_SECRXCTL);
-		if (rx_conf->offloads & DEV_RX_OFFLOAD_KEEP_CRC)
+		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_KEEP_CRC)
 			rdrxctl &= ~TXGBE_SECRXCTL_CRCSTRIP;
 		else
 			rdrxctl |= TXGBE_SECRXCTL_CRCSTRIP;
@@ -4527,8 +4527,8 @@ txgbe_dev_rxtx_start(struct rte_eth_dev *dev)
 		txgbe_setup_loopback_link_raptor(hw);
 
 #ifdef RTE_LIB_SECURITY
-	if ((dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SECURITY) ||
-	    (dev->data->dev_conf.txmode.offloads & DEV_TX_OFFLOAD_SECURITY)) {
+	if ((dev->data->dev_conf.rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) ||
+	    (dev->data->dev_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY)) {
 		ret = txgbe_crypto_enable_ipsec(dev);
 		if (ret != 0) {
 			PMD_DRV_LOG(ERR,
@@ -4836,7 +4836,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 	 * Assume no header split and no VLAN strip support
 	 * on any Rx queue first .
 	 */
-	rxmode->offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
+	rxmode->offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	/* Set PSR type for VF RSS according to max Rx queue */
 	psrtype = TXGBE_VFPLCFG_PSRL4HDR |
@@ -4888,7 +4888,7 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 		 */
 		wr32(hw, TXGBE_RXCFG(i), srrctl);
 
-		if (rxmode->offloads & DEV_RX_OFFLOAD_SCATTER ||
+		if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_SCATTER ||
 		    /* It adds dual VLAN length for supporting dual VLAN */
 		    (rxmode->max_rx_pkt_len +
 				2 * TXGBE_VLAN_TAG_SIZE) > buf_size) {
@@ -4897,8 +4897,8 @@ txgbevf_dev_rx_init(struct rte_eth_dev *dev)
 			dev->data->scattered_rx = 1;
 		}
 
-		if (rxq->offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-			rxmode->offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+		if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+			rxmode->offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 	}
 
 	/*
@@ -5069,7 +5069,7 @@ txgbe_config_rss_filter(struct rte_eth_dev *dev,
 	 * little-endian order.
 	 */
 	reta = 0;
-	for (i = 0, j = 0; i < ETH_RSS_RETA_SIZE_128; i++, j++) {
+	for (i = 0, j = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++, j++) {
 		if (j == conf->conf.queue_num)
 			j = 0;
 		reta = (reta >> 8) | LS32(conf->conf.queue[j], 24, 0xFF);
diff --git a/drivers/net/txgbe/txgbe_rxtx.h b/drivers/net/txgbe/txgbe_rxtx.h
index b96f58a3f848..27d4c842c0e7 100644
--- a/drivers/net/txgbe/txgbe_rxtx.h
+++ b/drivers/net/txgbe/txgbe_rxtx.h
@@ -309,7 +309,7 @@ struct txgbe_rx_queue {
 	uint8_t             rx_deferred_start; /**< not in global dev start. */
 	/** flags to set in mbuf when a vlan is detected. */
 	uint64_t            vlan_flags;
-	uint64_t	    offloads; /**< Rx offloads with DEV_RX_OFFLOAD_* */
+	uint64_t	    offloads; /**< Rx offloads with RTE_ETH_RX_OFFLOAD_* */
 	/** need to alloc dummy mbuf, for wraparound when scanning hw ring */
 	struct rte_mbuf fake_mbuf;
 	/** hold packets to return to application */
@@ -392,7 +392,7 @@ struct txgbe_tx_queue {
 	uint8_t             pthresh;       /**< Prefetch threshold register. */
 	uint8_t             hthresh;       /**< Host threshold register. */
 	uint8_t             wthresh;       /**< Write-back threshold reg. */
-	uint64_t            offloads; /* Tx offload flags of DEV_TX_OFFLOAD_* */
+	uint64_t            offloads; /* Tx offload flags of RTE_ETH_TX_OFFLOAD_* */
 	uint32_t            ctx_curr;      /**< Hardware context states. */
 	/** Hardware context0 history. */
 	struct txgbe_ctx_info ctx_cache[TXGBE_CTX_NUM];
diff --git a/drivers/net/txgbe/txgbe_tm.c b/drivers/net/txgbe/txgbe_tm.c
index 3abe3959eb1a..3171be73d05d 100644
--- a/drivers/net/txgbe/txgbe_tm.c
+++ b/drivers/net/txgbe/txgbe_tm.c
@@ -118,14 +118,14 @@ txgbe_tc_nb_get(struct rte_eth_dev *dev)
 	uint8_t nb_tcs = 0;
 
 	eth_conf = &dev->data->dev_conf;
-	if (eth_conf->txmode.mq_mode == ETH_MQ_TX_DCB) {
+	if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_DCB) {
 		nb_tcs = eth_conf->tx_adv_conf.dcb_tx_conf.nb_tcs;
-	} else if (eth_conf->txmode.mq_mode == ETH_MQ_TX_VMDQ_DCB) {
+	} else if (eth_conf->txmode.mq_mode == RTE_ETH_MQ_TX_VMDQ_DCB) {
 		if (eth_conf->tx_adv_conf.vmdq_dcb_tx_conf.nb_queue_pools ==
-		    ETH_32_POOLS)
-			nb_tcs = ETH_4_TCS;
+		    RTE_ETH_32_POOLS)
+			nb_tcs = RTE_ETH_4_TCS;
 		else
-			nb_tcs = ETH_8_TCS;
+			nb_tcs = RTE_ETH_8_TCS;
 	} else {
 		nb_tcs = 1;
 	}
@@ -364,10 +364,10 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 	if (vf_num) {
 		/* no DCB */
 		if (nb_tcs == 1) {
-			if (vf_num >= ETH_32_POOLS) {
+			if (vf_num >= RTE_ETH_32_POOLS) {
 				*nb = 2;
 				*base = vf_num * 2;
-			} else if (vf_num >= ETH_16_POOLS) {
+			} else if (vf_num >= RTE_ETH_16_POOLS) {
 				*nb = 4;
 				*base = vf_num * 4;
 			} else {
@@ -381,7 +381,7 @@ txgbe_queue_base_nb_get(struct rte_eth_dev *dev, uint16_t tc_node_no,
 		}
 	} else {
 		/* VT off */
-		if (nb_tcs == ETH_8_TCS) {
+		if (nb_tcs == RTE_ETH_8_TCS) {
 			switch (tc_node_no) {
 			case 0:
 				*base = 0;
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index a202931e9aed..778460aab5e1 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -125,8 +125,8 @@ static pthread_mutex_t internal_list_lock = PTHREAD_MUTEX_INITIALIZER;
 
 static struct rte_eth_link pmd_link = {
 		.link_speed = 10000,
-		.link_duplex = ETH_LINK_FULL_DUPLEX,
-		.link_status = ETH_LINK_DOWN
+		.link_duplex = RTE_ETH_LINK_FULL_DUPLEX,
+		.link_status = RTE_ETH_LINK_DOWN
 };
 
 struct rte_vhost_vring_state {
@@ -823,7 +823,7 @@ new_device(int vid)
 
 	rte_vhost_get_mtu(vid, &eth_dev->data->mtu);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_UP;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;
 
 	rte_atomic32_set(&internal->dev_attached, 1);
 	update_queuing_status(eth_dev);
@@ -858,7 +858,7 @@ destroy_device(int vid)
 	rte_atomic32_set(&internal->dev_attached, 0);
 	update_queuing_status(eth_dev);
 
-	eth_dev->data->dev_link.link_status = ETH_LINK_DOWN;
+	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
 
 	if (eth_dev->data->rx_queues && eth_dev->data->tx_queues) {
 		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
@@ -1124,7 +1124,7 @@ eth_dev_configure(struct rte_eth_dev *dev)
 	if (vhost_driver_setup(dev) < 0)
 		return -1;
 
-	internal->vlan_strip = !!(rxmode->offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	internal->vlan_strip = !!(rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -1273,9 +1273,9 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->max_tx_queues = internal->max_queues;
 	dev_info->min_rx_bufsize = 0;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				DEV_TX_OFFLOAD_VLAN_INSERT;
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
 	return 0;
 }
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index e58085a2c95a..00bbbb2b3537 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -703,7 +703,7 @@ int
 virtio_dev_close(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "virtio_dev_close");
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
@@ -1763,7 +1763,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		     hw->mac_addr[0], hw->mac_addr[1], hw->mac_addr[2],
 		     hw->mac_addr[3], hw->mac_addr[4], hw->mac_addr[5]);
 
-	if (hw->speed == ETH_SPEED_NUM_UNKNOWN) {
+	if (hw->speed == RTE_ETH_SPEED_NUM_UNKNOWN) {
 		if (virtio_with_feature(hw, VIRTIO_NET_F_SPEED_DUPLEX)) {
 			config = &local_config;
 			virtio_read_dev_config(hw,
@@ -1777,7 +1777,7 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 		}
 	}
 	if (hw->duplex == DUPLEX_UNKNOWN)
-		hw->duplex = ETH_LINK_FULL_DUPLEX;
+		hw->duplex = RTE_ETH_LINK_FULL_DUPLEX;
 	PMD_INIT_LOG(DEBUG, "link speed = %d, duplex = %d",
 		hw->speed, hw->duplex);
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ)) {
@@ -1876,7 +1876,7 @@ int
 eth_virtio_dev_init(struct rte_eth_dev *eth_dev)
 {
 	struct virtio_hw *hw = eth_dev->data->dev_private;
-	uint32_t speed = ETH_SPEED_NUM_UNKNOWN;
+	uint32_t speed = RTE_ETH_SPEED_NUM_UNKNOWN;
 	int vectorized = 0;
 	int ret;
 
@@ -1948,22 +1948,22 @@ static uint32_t
 virtio_dev_speed_capa_get(uint32_t speed)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
@@ -2079,14 +2079,14 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	PMD_INIT_LOG(DEBUG, "configure");
 	req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;
 
-	if (rxmode->mq_mode != ETH_MQ_RX_NONE) {
+	if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Rx multi queue mode %d",
 			rxmode->mq_mode);
 		return -EINVAL;
 	}
 
-	if (txmode->mq_mode != ETH_MQ_TX_NONE) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		PMD_DRV_LOG(ERR,
 			"Unsupported Tx multi queue mode %d",
 			txmode->mq_mode);
@@ -2104,20 +2104,20 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 
 	hw->max_rx_pkt_len = rxmode->max_rx_pkt_len;
 
-	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			   DEV_RX_OFFLOAD_TCP_CKSUM))
+	if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_RX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_GUEST_CSUM);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 			(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 
-	if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
-			   DEV_TX_OFFLOAD_TCP_CKSUM))
+	if (tx_offloads & (RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			   RTE_ETH_TX_OFFLOAD_TCP_CKSUM))
 		req_features |= (1ULL << VIRTIO_NET_F_CSUM);
 
-	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+	if (tx_offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO)
 		req_features |=
 			(1ULL << VIRTIO_NET_F_HOST_TSO4) |
 			(1ULL << VIRTIO_NET_F_HOST_TSO6);
@@ -2129,15 +2129,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 			return ret;
 	}
 
-	if ((rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			    DEV_RX_OFFLOAD_TCP_CKSUM)) &&
+	if ((rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			    RTE_ETH_RX_OFFLOAD_TCP_CKSUM)) &&
 		!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM)) {
 		PMD_DRV_LOG(ERR,
 			"rx checksum not available on this host");
 		return -ENOTSUP;
 	}
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) &&
 		(!virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
 		 !virtio_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6))) {
 		PMD_DRV_LOG(ERR,
@@ -2149,12 +2149,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	if (virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VQ))
 		virtio_dev_cq_start(dev);
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 		hw->vlan_strip = 1;
 
-	hw->rx_ol_scatter = (rx_offloads & DEV_RX_OFFLOAD_SCATTER);
+	hw->rx_ol_scatter = (rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER);
 
-	if ((rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if ((rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 			!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 		PMD_DRV_LOG(ERR,
 			    "vlan filtering not available on this host");
@@ -2207,7 +2207,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+			if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 				PMD_DRV_LOG(INFO,
 					"disabled packed ring vectorized rx for TCP_LRO enabled");
 				hw->use_vec_rx = 0;
@@ -2234,10 +2234,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 				hw->use_vec_rx = 0;
 			}
 
-			if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_CKSUM |
-					   DEV_RX_OFFLOAD_TCP_LRO |
-					   DEV_RX_OFFLOAD_VLAN_STRIP)) {
+			if (rx_offloads & (RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+					   RTE_ETH_RX_OFFLOAD_TCP_LRO |
+					   RTE_ETH_RX_OFFLOAD_VLAN_STRIP)) {
 				PMD_DRV_LOG(INFO,
 					"disabled split ring vectorized rx for offloading enabled");
 				hw->use_vec_rx = 0;
@@ -2401,7 +2401,7 @@ virtio_dev_stop(struct rte_eth_dev *dev)
 {
 	struct virtio_hw *hw = dev->data->dev_private;
 	struct rte_eth_link link;
-	struct rte_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
+	struct rte_eth_intr_conf *intr_conf = &dev->data->dev_conf.intr_conf;
 
 	PMD_INIT_LOG(DEBUG, "stop");
 	dev->data->dev_started = 0;
@@ -2440,28 +2440,28 @@ virtio_dev_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complet
 	memset(&link, 0, sizeof(link));
 	link.link_duplex = hw->duplex;
 	link.link_speed  = hw->speed;
-	link.link_autoneg = ETH_LINK_AUTONEG;
+	link.link_autoneg = RTE_ETH_LINK_AUTONEG;
 
 	if (!hw->started) {
-		link.link_status = ETH_LINK_DOWN;
-		link.link_speed = ETH_SPEED_NUM_NONE;
+		link.link_status = RTE_ETH_LINK_DOWN;
+		link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 	} else if (virtio_with_feature(hw, VIRTIO_NET_F_STATUS)) {
 		PMD_INIT_LOG(DEBUG, "Get link status from hw");
 		virtio_read_dev_config(hw,
 				offsetof(struct virtio_net_config, status),
 				&status, sizeof(status));
 		if ((status & VIRTIO_NET_S_LINK_UP) == 0) {
-			link.link_status = ETH_LINK_DOWN;
-			link.link_speed = ETH_SPEED_NUM_NONE;
+			link.link_status = RTE_ETH_LINK_DOWN;
+			link.link_speed = RTE_ETH_SPEED_NUM_NONE;
 			PMD_INIT_LOG(DEBUG, "Port %d is down",
 				     dev->data->port_id);
 		} else {
-			link.link_status = ETH_LINK_UP;
+			link.link_status = RTE_ETH_LINK_UP;
 			PMD_INIT_LOG(DEBUG, "Port %d is up",
 				     dev->data->port_id);
 		}
 	} else {
-		link.link_status = ETH_LINK_UP;
+		link.link_status = RTE_ETH_LINK_UP;
 	}
 
 	return rte_eth_linkstatus_set(dev, &link);
@@ -2474,8 +2474,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	struct virtio_hw *hw = dev->data->dev_private;
 	uint64_t offloads = rxmode->offloads;
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if ((offloads & DEV_RX_OFFLOAD_VLAN_FILTER) &&
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if ((offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) &&
 				!virtio_with_feature(hw, VIRTIO_NET_F_CTRL_VLAN)) {
 
 			PMD_DRV_LOG(NOTICE,
@@ -2485,8 +2485,8 @@ virtio_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 		}
 	}
 
-	if (mask & ETH_VLAN_STRIP_MASK)
-		hw->vlan_strip = !!(offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	if (mask & RTE_ETH_VLAN_STRIP_MASK)
+		hw->vlan_strip = !!(offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 
 	return 0;
 }
@@ -2508,33 +2508,33 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_mtu = hw->max_mtu;
 
 	host_features = VIRTIO_OPS(hw)->get_features(hw);
-	dev_info->rx_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
-	dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+	dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	if (host_features & (1ULL << VIRTIO_NET_F_MRG_RXBUF))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_SCATTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SCATTER;
 	if (host_features & (1ULL << VIRTIO_NET_F_GUEST_CSUM)) {
 		dev_info->rx_offload_capa |=
-			DEV_RX_OFFLOAD_TCP_CKSUM |
-			DEV_RX_OFFLOAD_UDP_CKSUM;
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_UDP_CKSUM;
 	}
 	if (host_features & (1ULL << VIRTIO_NET_F_CTRL_VLAN))
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_VLAN_FILTER;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 	tso_mask = (1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 		(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_TCP_LRO;
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_TCP_LRO;
 
-	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
-				    DEV_TX_OFFLOAD_VLAN_INSERT;
+	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+				    RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
 		dev_info->tx_offload_capa |=
-			DEV_TX_OFFLOAD_UDP_CKSUM |
-			DEV_TX_OFFLOAD_TCP_CKSUM;
+			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 	}
 	tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
 		(1ULL << VIRTIO_NET_F_HOST_TSO6);
 	if ((host_features & tso_mask) == tso_mask)
-		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
+		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
 
 	return 0;
 }
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 1a3291273a11..825a6adfc2b1 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -41,21 +41,21 @@
 #define	VMXNET3_TX_MAX_SEG	UINT8_MAX
 
 #define VMXNET3_TX_OFFLOAD_CAP		\
-	(DEV_TX_OFFLOAD_VLAN_INSERT |	\
-	 DEV_TX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_TX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_TX_OFFLOAD_TCP_TSO |	\
-	 DEV_TX_OFFLOAD_MULTI_SEGS)
+	(RTE_ETH_TX_OFFLOAD_VLAN_INSERT |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_TX_OFFLOAD_TCP_TSO |	\
+	 RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
 
 #define VMXNET3_RX_OFFLOAD_CAP		\
-	(DEV_RX_OFFLOAD_VLAN_STRIP |	\
-	 DEV_RX_OFFLOAD_VLAN_FILTER |   \
-	 DEV_RX_OFFLOAD_SCATTER |	\
-	 DEV_RX_OFFLOAD_UDP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_CKSUM |	\
-	 DEV_RX_OFFLOAD_TCP_LRO |	\
-	 DEV_RX_OFFLOAD_JUMBO_FRAME |   \
-	 DEV_RX_OFFLOAD_RSS_HASH)
+	(RTE_ETH_RX_OFFLOAD_VLAN_STRIP |	\
+	 RTE_ETH_RX_OFFLOAD_VLAN_FILTER |   \
+	 RTE_ETH_RX_OFFLOAD_SCATTER |	\
+	 RTE_ETH_RX_OFFLOAD_UDP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_CKSUM |	\
+	 RTE_ETH_RX_OFFLOAD_TCP_LRO |	\
+	 RTE_ETH_RX_OFFLOAD_JUMBO_FRAME |   \
+	 RTE_ETH_RX_OFFLOAD_RSS_HASH)
 
 int vmxnet3_segs_dynfield_offset = -1;
 
@@ -399,9 +399,9 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
 
 	/* set the initial link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(eth_dev, &link);
 
 	return 0;
@@ -487,8 +487,8 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (dev->data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG)
-		dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev->data->dev_conf.rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+		dev->data->dev_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	if (dev->data->nb_tx_queues > VMXNET3_MAX_TX_QUEUES ||
 	    dev->data->nb_rx_queues > VMXNET3_MAX_RX_QUEUES) {
@@ -548,7 +548,7 @@ vmxnet3_dev_configure(struct rte_eth_dev *dev)
 	hw->queueDescPA = mz->iova;
 	hw->queue_desc_len = (uint16_t)size;
 
-	if (dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Allocate memory structure for UPT1_RSSConf and configure */
 		mz = gpa_zone_reserve(dev, sizeof(struct VMXNET3_RSSConf),
 				      "rss_conf", rte_socket_id(),
@@ -844,15 +844,15 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	devRead->rxFilterConf.rxMode = 0;
 
 	/* Setting up feature flags */
-	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_CHECKSUM)
 		devRead->misc.uptFeatures |= VMXNET3_F_RXCSUM;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		devRead->misc.uptFeatures |= VMXNET3_F_LRO;
 		devRead->misc.maxNumRxSG = 0;
 	}
 
-	if (port_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	if (port_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		ret = vmxnet3_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS)
 			return ret;
@@ -864,7 +864,7 @@ vmxnet3_setup_driver_shared(struct rte_eth_dev *dev)
 	}
 
 	ret = vmxnet3_dev_vlan_offload_set(dev,
-			ETH_VLAN_STRIP_MASK | ETH_VLAN_FILTER_MASK);
+			RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK);
 	if (ret)
 		return ret;
 
@@ -931,7 +931,7 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
 	}
 
 	if (VMXNET3_VERSION_GE_4(hw) &&
-	    dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) {
+	    dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
 		/* Check for additional RSS  */
 		ret = vmxnet3_v4_rss_configure(dev);
 		if (ret != VMXNET3_SUCCESS) {
@@ -1040,9 +1040,9 @@ vmxnet3_dev_stop(struct rte_eth_dev *dev)
 
 	/* Clear recorded link status */
 	memset(&link, 0, sizeof(link));
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 	rte_eth_linkstatus_set(dev, &link);
 
 	hw->adapter_stopped = 1;
@@ -1372,7 +1372,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
 	dev_info->min_mtu = VMXNET3_MIN_MTU;
 	dev_info->max_mtu = VMXNET3_MAX_MTU;
-	dev_info->speed_capa = ETH_LINK_SPEED_10G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
 	dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
 
 	dev_info->flow_type_rss_offloads = VMXNET3_RSS_OFFLOAD_ALL;
@@ -1454,10 +1454,10 @@ __vmxnet3_dev_link_update(struct rte_eth_dev *dev,
 	ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
 
 	if (ret & 0x1)
-		link.link_status = ETH_LINK_UP;
-	link.link_duplex = ETH_LINK_FULL_DUPLEX;
-	link.link_speed = ETH_SPEED_NUM_10G;
-	link.link_autoneg = ETH_LINK_FIXED;
+		link.link_status = RTE_ETH_LINK_UP;
+	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
+	link.link_speed = RTE_ETH_SPEED_NUM_10G;
+	link.link_autoneg = RTE_ETH_LINK_FIXED;
 
 	return rte_eth_linkstatus_set(dev, &link);
 }
@@ -1510,7 +1510,7 @@ vmxnet3_dev_promiscuous_disable(struct rte_eth_dev *dev)
 	uint32_t *vf_table = hw->shared->devRead.rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 		memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 	else
 		memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
@@ -1580,8 +1580,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	uint32_t *vf_table = devRead->rxFilterConf.vfTable;
 	uint64_t rx_offloads = dev->data->dev_conf.rxmode.offloads;
 
-	if (mask & ETH_VLAN_STRIP_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
 			devRead->misc.uptFeatures |= UPT1_F_RXVLAN;
 		else
 			devRead->misc.uptFeatures &= ~UPT1_F_RXVLAN;
@@ -1590,8 +1590,8 @@ vmxnet3_dev_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 				       VMXNET3_CMD_UPDATE_FEATURE);
 	}
 
-	if (mask & ETH_VLAN_FILTER_MASK) {
-		if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+	if (mask & RTE_ETH_VLAN_FILTER_MASK) {
+		if (rx_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
 			memcpy(vf_table, hw->shadow_vfta, VMXNET3_VFT_TABLE_SIZE);
 		else
 			memset(vf_table, 0xff, VMXNET3_VFT_TABLE_SIZE);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 59bee9723cfc..7588ba929b65 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -32,18 +32,18 @@
 				VMXNET3_MAX_RX_QUEUES + 1)
 
 #define VMXNET3_RSS_OFFLOAD_ALL ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 #define VMXNET3_V4_RSS_MASK ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP)
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 
 #define VMXNET3_MANDATORY_V4_RSS ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP)
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 
 /* RSS configuration structure - shared with device through GPA */
 typedef struct VMXNET3_RSSConf {
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 5cf53d4de825..0f2671f528f4 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1326,13 +1326,13 @@ vmxnet3_v4_rss_configure(struct rte_eth_dev *dev)
 	rss_hf = port_rss_conf->rss_hf &
 		(VMXNET3_V4_RSS_MASK | VMXNET3_RSS_OFFLOAD_ALL);
 
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_TCPIP6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_UDP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
 		cmdInfo->setRSSFields |= VMXNET3_RSS_FIELDS_UDPIP6;
 
 	VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD,
@@ -1389,13 +1389,13 @@ vmxnet3_rss_configure(struct rte_eth_dev *dev)
 	/* loading hashType */
 	dev_rss_conf->hashType = 0;
 	rss_hf = port_rss_conf->rss_hf & VMXNET3_RSS_OFFLOAD_ALL;
-	if (rss_hf & ETH_RSS_IPV4)
+	if (rss_hf & RTE_ETH_RSS_IPV4)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV4;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV4_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV4;
-	if (rss_hf & ETH_RSS_IPV6)
+	if (rss_hf & RTE_ETH_RSS_IPV6)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_IPV6;
-	if (rss_hf & ETH_RSS_NONFRAG_IPV6_TCP)
+	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
 		dev_rss_conf->hashType |= VMXNET3_RSS_HASH_TYPE_TCP_IPV6;
 
 	return VMXNET3_SUCCESS;
diff --git a/examples/bbdev_app/main.c b/examples/bbdev_app/main.c
index 5251db0b1674..ecc6ef2965ee 100644
--- a/examples/bbdev_app/main.c
+++ b/examples/bbdev_app/main.c
@@ -71,12 +71,12 @@ mbuf_input(struct rte_mbuf *mbuf)
 
 static const struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -334,7 +334,7 @@ check_port_link_status(uint16_t port_id)
 
 		if (link_get_err >= 0 && link.link_status) {
 			const char *dp = (link.link_duplex ==
-				ETH_LINK_FULL_DUPLEX) ?
+				RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex";
 			printf("\nPort %u Link Up - speed %s - %s\n",
 				port_id,
diff --git a/examples/bond/main.c b/examples/bond/main.c
index f48400e21156..e4c627e203a4 100644
--- a/examples/bond/main.c
+++ b/examples/bond/main.c
@@ -116,18 +116,18 @@ static struct rte_mempool *mbuf_pool;
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -151,9 +151,9 @@ slave_port_init(uint16_t portid, struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
@@ -243,9 +243,9 @@ bond_port_init(struct rte_mempool *mbuf_pool)
 			"Error during getting device (port %u) info: %s\n",
 			BOND_PORT, strerror(-retval));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(BOND_PORT, 1, 1, &local_port_conf);
 	if (retval != 0)
 		rte_exit(EXIT_FAILURE, "port %u: configuration failed (res=%d)\n",
diff --git a/examples/distributor/main.c b/examples/distributor/main.c
index 1b1029660e77..e6af8420e4c6 100644
--- a/examples/distributor/main.c
+++ b/examples/distributor/main.c
@@ -80,16 +80,16 @@ struct app_stats prev_app_stats;
 
 static const struct rte_eth_conf port_conf_default = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		}
 	},
 };
@@ -127,9 +127,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
diff --git a/examples/ethtool/ethtool-app/main.c b/examples/ethtool/ethtool-app/main.c
index 21ed85c7d6c9..5053d174335c 100644
--- a/examples/ethtool/ethtool-app/main.c
+++ b/examples/ethtool/ethtool-app/main.c
@@ -98,7 +98,7 @@ static void setup_ports(struct app_config *app_cfg, int cnt_ports)
 	int ret;
 
 	memset(&cfg_port, 0, sizeof(cfg_port));
-	cfg_port.txmode.mq_mode = ETH_MQ_TX_NONE;
+	cfg_port.txmode.mq_mode = RTE_ETH_MQ_TX_NONE;
 
 	for (idx_port = 0; idx_port < cnt_ports; idx_port++) {
 		struct app_port *ptr_port = &app_cfg->ports[idx_port];
diff --git a/examples/ethtool/lib/rte_ethtool.c b/examples/ethtool/lib/rte_ethtool.c
index 413251630709..e7cdf8d5775b 100644
--- a/examples/ethtool/lib/rte_ethtool.c
+++ b/examples/ethtool/lib/rte_ethtool.c
@@ -233,13 +233,13 @@ rte_ethtool_get_pauseparam(uint16_t port_id,
 	pause_param->tx_pause = 0;
 	pause_param->rx_pause = 0;
 	switch (fc_conf.mode) {
-	case RTE_FC_RX_PAUSE:
+	case RTE_ETH_FC_RX_PAUSE:
 		pause_param->rx_pause = 1;
 		break;
-	case RTE_FC_TX_PAUSE:
+	case RTE_ETH_FC_TX_PAUSE:
 		pause_param->tx_pause = 1;
 		break;
-	case RTE_FC_FULL:
+	case RTE_ETH_FC_FULL:
 		pause_param->rx_pause = 1;
 		pause_param->tx_pause = 1;
 	default:
@@ -277,14 +277,14 @@ rte_ethtool_set_pauseparam(uint16_t port_id,
 
 	if (pause_param->tx_pause) {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_FULL;
+			fc_conf.mode = RTE_ETH_FC_FULL;
 		else
-			fc_conf.mode = RTE_FC_TX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_TX_PAUSE;
 	} else {
 		if (pause_param->rx_pause)
-			fc_conf.mode = RTE_FC_RX_PAUSE;
+			fc_conf.mode = RTE_ETH_FC_RX_PAUSE;
 		else
-			fc_conf.mode = RTE_FC_NONE;
+			fc_conf.mode = RTE_ETH_FC_NONE;
 	}
 
 	status = rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
@@ -398,12 +398,12 @@ rte_ethtool_net_set_rx_mode(uint16_t port_id)
 	for (vf = 0; vf < num_vfs; vf++) {
 #ifdef RTE_NET_IXGBE
 		rte_pmd_ixgbe_set_vf_rxmode(port_id, vf,
-			ETH_VMDQ_ACCEPT_UNTAG, 0);
+			RTE_ETH_VMDQ_ACCEPT_UNTAG, 0);
 #endif
 	}
 
 	/* Enable Rx vlan filter, VF unspport status is discard */
-	ret = rte_eth_dev_set_vlan_offload(port_id, ETH_VLAN_FILTER_MASK);
+	ret = rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_FILTER_MASK);
 	if (ret != 0)
 		return ret;
 
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index f70ab0cc9e38..3ac98add5692 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -283,14 +283,14 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -312,12 +312,12 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_RSS_HASH)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_RSS_HASH;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_RSS_HASH)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c
index ca6cd200caad..5780928d75ee 100644
--- a/examples/eventdev_pipeline/pipeline_worker_tx.c
+++ b/examples/eventdev_pipeline/pipeline_worker_tx.c
@@ -614,14 +614,14 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 	struct rte_eth_rxconf rx_conf;
 	static const struct rte_eth_conf port_conf_default = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
-				.rss_hf = ETH_RSS_IP |
-					  ETH_RSS_TCP |
-					  ETH_RSS_UDP,
+				.rss_hf = RTE_ETH_RSS_IP |
+					  RTE_ETH_RSS_TCP |
+					  RTE_ETH_RSS_UDP,
 			}
 		}
 	};
@@ -643,9 +643,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	rx_conf = dev_info.default_rxconf;
 	rx_conf.offloads = port_conf.rxmode.offloads;
 
diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
index db71f5aa0401..f44ee65372ff 100644
--- a/examples/flow_classify/flow_classify.c
+++ b/examples/flow_classify/flow_classify.c
@@ -218,9 +218,9 @@ port_init(uint8_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/flow_filtering/main.c b/examples/flow_filtering/main.c
index 29fb4b3d55ef..150406e385d4 100644
--- a/examples/flow_filtering/main.c
+++ b/examples/flow_filtering/main.c
@@ -113,7 +113,7 @@ assert_link_status(void)
 	memset(&link, 0, sizeof(link));
 	do {
 		link_get_err = rte_eth_link_get(port_id, &link);
-		if (link_get_err == 0 && link.link_status == ETH_LINK_UP)
+		if (link_get_err == 0 && link.link_status == RTE_ETH_LINK_UP)
 			break;
 		rte_delay_ms(CHECK_INTERVAL);
 	} while (--rep_cnt);
@@ -121,7 +121,7 @@ assert_link_status(void)
 	if (link_get_err < 0)
 		rte_exit(EXIT_FAILURE, ":: error: link get is failing: %s\n",
 			 rte_strerror(-link_get_err));
-	if (link.link_status == ETH_LINK_DOWN)
+	if (link.link_status == RTE_ETH_LINK_DOWN)
 		rte_exit(EXIT_FAILURE, ":: error: link is still down\n");
 }
 
@@ -138,12 +138,12 @@ init_port(void)
 		},
 		.txmode = {
 			.offloads =
-				DEV_TX_OFFLOAD_VLAN_INSERT |
-				DEV_TX_OFFLOAD_IPV4_CKSUM  |
-				DEV_TX_OFFLOAD_UDP_CKSUM   |
-				DEV_TX_OFFLOAD_TCP_CKSUM   |
-				DEV_TX_OFFLOAD_SCTP_CKSUM  |
-				DEV_TX_OFFLOAD_TCP_TSO,
+				RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+				RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM   |
+				RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  |
+				RTE_ETH_TX_OFFLOAD_TCP_TSO,
 		},
 	};
 	struct rte_eth_txconf txq_conf;
diff --git a/examples/ioat/ioatfwd.c b/examples/ioat/ioatfwd.c
index 0c413180f889..94e3ac91b299 100644
--- a/examples/ioat/ioatfwd.c
+++ b/examples/ioat/ioatfwd.c
@@ -819,13 +819,13 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 	/* Configuring port to use RSS for multiple RX queues. 8< */
 	static const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 			.max_rx_pkt_len = RTE_ETHER_MAX_LEN
 		},
 		.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_PROTO_MASK,
+				.rss_hf = RTE_ETH_RSS_PROTO_MASK,
 			}
 		}
 	};
@@ -853,9 +853,9 @@ port_init(uint16_t portid, struct rte_mempool *mbuf_pool, uint16_t nb_queues)
 
 	local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 		dev_info.flow_type_rss_offloads;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, nb_queues, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Cannot configure device:"
diff --git a/examples/ip_fragmentation/main.c b/examples/ip_fragmentation/main.c
index f24536972084..aa41fcc1d037 100644
--- a/examples/ip_fragmentation/main.c
+++ b/examples/ip_fragmentation/main.c
@@ -148,14 +148,14 @@ static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
 		.split_hdr_size = 0,
-		.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
-			     DEV_RX_OFFLOAD_SCATTER |
-			     DEV_RX_OFFLOAD_JUMBO_FRAME),
+		.offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+			     RTE_ETH_RX_OFFLOAD_SCATTER |
+			     RTE_ETH_RX_OFFLOAD_JUMBO_FRAME),
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -624,7 +624,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ip_pipeline/link.c b/examples/ip_pipeline/link.c
index 16bcffe356bc..f6ecd9b0fe3a 100644
--- a/examples/ip_pipeline/link.c
+++ b/examples/ip_pipeline/link.c
@@ -45,7 +45,7 @@ link_next(struct link *link)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -57,12 +57,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -77,11 +77,11 @@ rss_setup(uint16_t port_id,
 	memset(reta_conf, 0, sizeof(reta_conf));
 
 	for (i = 0; i < reta_size; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < reta_size; i++) {
-		uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
-		uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+		uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint32_t rss_qs_pos = i % rss->n_queues;
 
 		reta_conf[reta_id].reta[reta_pos] =
@@ -139,7 +139,7 @@ link_create(const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -157,9 +157,9 @@ link_create(const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -267,5 +267,5 @@ link_is_up(const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
diff --git a/examples/ip_reassembly/main.c b/examples/ip_reassembly/main.c
index 8645ac790be4..8aabea002bbb 100644
--- a/examples/ip_reassembly/main.c
+++ b/examples/ip_reassembly/main.c
@@ -161,22 +161,22 @@ static struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
 		.split_hdr_size = 0,
-		.offloads = (DEV_RX_OFFLOAD_CHECKSUM |
-			     DEV_RX_OFFLOAD_JUMBO_FRAME),
+		.offloads = (RTE_ETH_RX_OFFLOAD_CHECKSUM |
+			     RTE_ETH_RX_OFFLOAD_JUMBO_FRAME),
 	},
 	.rx_adv_conf = {
 			.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_MULTI_SEGS),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS),
 	},
 };
 
@@ -740,7 +740,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -1097,9 +1097,9 @@ main(int argc, char **argv)
 		n_tx_queue = nb_lcores;
 		if (n_tx_queue > MAX_TX_QUEUE_PER_PORT)
 			n_tx_queue = MAX_TX_QUEUE_PER_PORT;
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index f252d34985b4..73932564e459 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -234,20 +234,20 @@ static struct lcore_conf lcore_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1456,10 +1456,10 @@ print_usage(const char *prgname)
 		"               \"parallel\" : Parallel\n"
 		"  --" CMD_LINE_OPT_RX_OFFLOAD
 		": bitmask of the RX HW offload capabilities to enable/use\n"
-		"                         (DEV_RX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_RX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_TX_OFFLOAD
 		": bitmask of the TX HW offload capabilities to enable/use\n"
-		"                         (DEV_TX_OFFLOAD_*)\n"
+		"                         (RTE_ETH_TX_OFFLOAD_*)\n"
 		"  --" CMD_LINE_OPT_REASSEMBLE " NUM"
 		": max number of entries in reassemble(fragment) table\n"
 		"    (zero (default value) disables reassembly)\n"
@@ -1908,7 +1908,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2211,12 +2211,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 
 	frame_size = MTU_TO_FRAMELEN(mtu_size);
 	if (frame_size > local_port_conf.rxmode.max_rx_pkt_len)
-		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	local_port_conf.rxmode.max_rx_pkt_len = frame_size;
 
 	if (multi_seg_required()) {
-		local_port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_SCATTER;
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+		local_port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 	}
 
 	local_port_conf.rxmode.offloads |= req_rx_offloads;
@@ -2239,12 +2239,12 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 			portid, local_port_conf.txmode.offloads,
 			dev_info.tx_offload_capa);
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_IPV4_CKSUM)
-		local_port_conf.txmode.offloads |= DEV_TX_OFFLOAD_IPV4_CKSUM;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
+		local_port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM;
 
 	printf("port %u configurng rx_offloads=0x%" PRIx64
 		", tx_offloads=0x%" PRIx64 "\n",
@@ -2302,7 +2302,7 @@ port_init(uint16_t portid, uint64_t req_rx_offloads, uint64_t req_tx_offloads)
 		/* Pre-populate pkt offloads based on capabilities */
 		qconf->outbound.ipv4_offloads = PKT_TX_IPV4;
 		qconf->outbound.ipv6_offloads = PKT_TX_IPV6;
-		if (local_port_conf.txmode.offloads & DEV_TX_OFFLOAD_IPV4_CKSUM)
+		if (local_port_conf.txmode.offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
 			qconf->outbound.ipv4_offloads |= PKT_TX_IP_CKSUM;
 
 		tx_queueid++;
@@ -2663,7 +2663,7 @@ create_default_ipsec_flow(uint16_t port_id, uint64_t rx_offloads)
 	struct rte_flow *flow;
 	int ret;
 
-	if (!(rx_offloads & DEV_RX_OFFLOAD_SECURITY))
+	if (!(rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
 		return;
 
 	/* Add the default rte_flow to enable SECURITY for all ESP packets */
diff --git a/examples/ipsec-secgw/sa.c b/examples/ipsec-secgw/sa.c
index 17a28556c971..5cdd794f017f 100644
--- a/examples/ipsec-secgw/sa.c
+++ b/examples/ipsec-secgw/sa.c
@@ -986,7 +986,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	if (inbound) {
 		if ((dev_info.rx_offload_capa &
-				DEV_RX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_RX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware RX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -994,7 +994,7 @@ check_eth_dev_caps(uint16_t portid, uint32_t inbound)
 
 	} else { /* outbound */
 		if ((dev_info.tx_offload_capa &
-				DEV_TX_OFFLOAD_SECURITY) == 0) {
+				RTE_ETH_TX_OFFLOAD_SECURITY) == 0) {
 			RTE_LOG(WARNING, PORT,
 				"hardware TX IPSec offload is not supported\n");
 			return -EINVAL;
@@ -1628,7 +1628,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*rx_offloads |= DEV_RX_OFFLOAD_SECURITY;
+			*rx_offloads |= RTE_ETH_RX_OFFLOAD_SECURITY;
 	}
 
 	/* Check for outbound rules that use offloads and use this port */
@@ -1639,7 +1639,7 @@ sa_check_offloads(uint16_t port_id, uint64_t *rx_offloads,
 				rule_type ==
 				RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
 				&& rule->portid == port_id)
-			*tx_offloads |= DEV_TX_OFFLOAD_SECURITY;
+			*tx_offloads |= RTE_ETH_TX_OFFLOAD_SECURITY;
 	}
 	return 0;
 }
diff --git a/examples/ipv4_multicast/main.c b/examples/ipv4_multicast/main.c
index cc527d7f6b38..96fb325ff180 100644
--- a/examples/ipv4_multicast/main.c
+++ b/examples/ipv4_multicast/main.c
@@ -112,11 +112,11 @@ static struct rte_eth_conf port_conf = {
 	.rxmode = {
 		.max_rx_pkt_len = JUMBO_FRAME_MAX_SIZE,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_JUMBO_FRAME,
+		.offloads = RTE_ETH_RX_OFFLOAD_JUMBO_FRAME,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = DEV_TX_OFFLOAD_MULTI_SEGS,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = RTE_ETH_TX_OFFLOAD_MULTI_SEGS,
 	},
 };
 
@@ -620,7 +620,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/kni/main.c b/examples/kni/main.c
index beabb3c848aa..81124dc0dc88 100644
--- a/examples/kni/main.c
+++ b/examples/kni/main.c
@@ -95,7 +95,7 @@ static struct kni_port_params *kni_port_params_array[RTE_MAX_ETHPORTS];
 /* Options for configuring ethernet port */
 static struct rte_eth_conf port_conf = {
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -608,9 +608,9 @@ init_port(uint16_t port)
 			"Error during getting device (port %u) info: %s\n",
 			port, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE, "Could not configure port%u (%d)\n",
@@ -688,7 +688,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -792,9 +792,9 @@ kni_change_mtu_(uint16_t port_id, unsigned int new_mtu)
 	memcpy(&conf, &port_conf, sizeof(conf));
 	/* Set new MTU */
 	if (new_mtu > RTE_ETHER_MAX_LEN)
-		conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+		conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 	else
-		conf.rxmode.offloads &= ~DEV_RX_OFFLOAD_JUMBO_FRAME;
+		conf.rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 
 	/* mtu + length of header + length of FCS = max pkt length */
 	conf.rxmode.max_rx_pkt_len = new_mtu + KNI_ENET_HEADER_SIZE +
diff --git a/examples/l2fwd-crypto/main.c b/examples/l2fwd-crypto/main.c
index 5f539c458cdd..89489843e2bd 100644
--- a/examples/l2fwd-crypto/main.c
+++ b/examples/l2fwd-crypto/main.c
@@ -216,12 +216,12 @@ struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1809,7 +1809,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2633,9 +2633,9 @@ initialize_ports(struct l2fwd_crypto_options *options)
 			return retval;
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		retval = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (retval < 0) {
 			printf("Cannot configure device: err=%d, port=%u\n",
diff --git a/examples/l2fwd-event/l2fwd_common.c b/examples/l2fwd-event/l2fwd_common.c
index b8c1e02d7598..80a72f7095cf 100644
--- a/examples/l2fwd-event/l2fwd_common.c
+++ b/examples/l2fwd-event/l2fwd_common.c
@@ -15,7 +15,7 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 			.split_hdr_size = 0,
 		},
 		.txmode = {
-			.mq_mode = ETH_MQ_TX_NONE,
+			.mq_mode = RTE_ETH_MQ_TX_NONE,
 		},
 	};
 	uint16_t nb_ports_available = 0;
@@ -23,9 +23,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 	int ret;
 
 	if (rsrc->event_mode) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_key = NULL;
-		port_conf.rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
+		port_conf.rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
 	}
 
 	/* Initialise each port */
@@ -61,9 +61,9 @@ l2fwd_event_init_ports(struct l2fwd_resources *rsrc)
 				local_port_conf.rx_adv_conf.rss_conf.rss_hf);
 		}
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queue. 8< */
 		ret = rte_eth_dev_configure(port_id, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-event/main.c b/examples/l2fwd-event/main.c
index 1db89f2bd139..9806204b81d1 100644
--- a/examples/l2fwd-event/main.c
+++ b/examples/l2fwd-event/main.c
@@ -395,7 +395,7 @@ check_all_ports_link_status(struct l2fwd_resources *rsrc,
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/l2fwd-jobstats/main.c b/examples/l2fwd-jobstats/main.c
index bbb4a27a6d54..2e50339afb61 100644
--- a/examples/l2fwd-jobstats/main.c
+++ b/examples/l2fwd-jobstats/main.c
@@ -94,7 +94,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -726,7 +726,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -869,9 +869,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l2fwd-keepalive/main.c b/examples/l2fwd-keepalive/main.c
index 4e1a17cfe4f5..d228a842788d 100644
--- a/examples/l2fwd-keepalive/main.c
+++ b/examples/l2fwd-keepalive/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -478,7 +478,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -650,9 +650,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
 			rte_exit(EXIT_FAILURE,
diff --git a/examples/l2fwd/main.c b/examples/l2fwd/main.c
index 911e40c66e0e..b4a69dde63dc 100644
--- a/examples/l2fwd/main.c
+++ b/examples/l2fwd/main.c
@@ -95,7 +95,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -606,7 +606,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -792,9 +792,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure the number of queues for a port. */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/l3fwd-acl/main.c b/examples/l3fwd-acl/main.c
index a1f457b564b6..9323426e9b1d 100644
--- a/examples/l3fwd-acl/main.c
+++ b/examples/l3fwd-acl/main.c
@@ -124,20 +124,20 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP | ETH_RSS_UDP |
-				ETH_RSS_TCP | ETH_RSS_SCTP,
+			.rss_hf = RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |
+				RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -1815,9 +1815,9 @@ parse_args(int argc, char **argv)
 
 			printf("jumbo frame is enabled\n");
 			port_conf.rxmode.offloads |=
-					DEV_RX_OFFLOAD_JUMBO_FRAME;
+					RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			port_conf.txmode.offloads |=
-					DEV_TX_OFFLOAD_MULTI_SEGS;
+					RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 			/*
 			 * if no max-pkt-len set, then use the
@@ -1970,7 +1970,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2080,9 +2080,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-graph/main.c b/examples/l3fwd-graph/main.c
index a0de8ca9b42d..278fe95970f3 100644
--- a/examples/l3fwd-graph/main.c
+++ b/examples/l3fwd-graph/main.c
@@ -111,18 +111,18 @@ static uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 				.rss_key = NULL,
-				.rss_hf = ETH_RSS_IP,
+				.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -494,8 +494,8 @@ parse_args(int argc, char **argv)
 			const struct option lenopts = {"max-pkt-len",
 						       required_argument, 0, 0};
 
-			port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-			port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+			port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+			port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 			/*
 			 * if no max-pkt-len set, use the default
@@ -628,7 +628,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* Clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -807,9 +807,9 @@ main(int argc, char **argv)
 		       nb_rx_queue, n_tx_queue);
 
 		rte_eth_dev_info_get(portid, &dev_info);
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index aa7b8db44ae8..85609e9d4593 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -250,19 +250,19 @@ uint16_t nb_lcore_params = RTE_DIM(lcore_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_RSS,
+		.mq_mode        = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_UDP,
+			.rss_hf = RTE_ETH_RSS_UDP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	}
 };
 
@@ -1961,9 +1961,9 @@ parse_args(int argc, char **argv)
 
 				printf("jumbo frame is enabled \n");
 				port_conf.rxmode.offloads |=
-						DEV_RX_OFFLOAD_JUMBO_FRAME;
+						RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 				port_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MULTI_SEGS;
+						RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 				/**
 				 * if no max-pkt-len set, use the default value
@@ -2222,7 +2222,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -2622,9 +2622,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/l3fwd_event.c b/examples/l3fwd/l3fwd_event.c
index 961860ea18ef..7c7613a83aad 100644
--- a/examples/l3fwd/l3fwd_event.c
+++ b/examples/l3fwd/l3fwd_event.c
@@ -75,9 +75,9 @@ l3fwd_eth_dev_port_setup(struct rte_eth_conf *port_conf)
 			rte_panic("Error during getting device (port %u) info:"
 				  "%s\n", port_id, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-						DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+						RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 						dev_info.flow_type_rss_offloads;
diff --git a/examples/l3fwd/main.c b/examples/l3fwd/main.c
index 00ac267af1dd..500444565463 100644
--- a/examples/l3fwd/main.c
+++ b/examples/l3fwd/main.c
@@ -120,19 +120,19 @@ static uint16_t nb_lcore_params = sizeof(lcore_params_array_default) /
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -703,8 +703,8 @@ parse_args(int argc, char **argv)
 				"max-pkt-len", required_argument, 0, 0
 			};
 
-			port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
-			port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+			port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
+			port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 			/*
 			 * if no max-pkt-len set, use the default
@@ -926,7 +926,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -1035,15 +1035,15 @@ l3fwd_poll_resource_setup(void)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
 
 		if (dev_info.max_rx_queues == 1)
-			local_port_conf.rxmode.mq_mode = ETH_MQ_RX_NONE;
+			local_port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_NONE;
 
 		if (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=
 				port_conf.rx_adv_conf.rss_conf.rss_hf) {
diff --git a/examples/link_status_interrupt/main.c b/examples/link_status_interrupt/main.c
index 7470aa539a90..6880b58476f4 100644
--- a/examples/link_status_interrupt/main.c
+++ b/examples/link_status_interrupt/main.c
@@ -83,7 +83,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.intr_conf = {
 		.lsc = 1, /**< lsc interrupt feature enabled */
@@ -147,7 +147,7 @@ print_stats(void)
 			   link_get_err < 0 ? "0" :
 			   rte_eth_link_speed_to_str(link.link_speed),
 			   link_get_err < 0 ? "Link get failed" :
-			   (link.link_duplex == ETH_LINK_FULL_DUPLEX ? \
+			   (link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
 					"full-duplex" : "half-duplex"),
 			   port_statistics[portid].tx,
 			   port_statistics[portid].rx,
@@ -507,7 +507,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -634,9 +634,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 		/* Configure RX and TX queues. 8< */
 		ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 		if (ret < 0)
diff --git a/examples/multi_process/client_server_mp/mp_server/init.c b/examples/multi_process/client_server_mp/mp_server/init.c
index 1ad71ca7ec5f..23307073c904 100644
--- a/examples/multi_process/client_server_mp/mp_server/init.c
+++ b/examples/multi_process/client_server_mp/mp_server/init.c
@@ -94,7 +94,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	const struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS
+			.mq_mode = RTE_ETH_MQ_RX_RSS
 		}
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_clients;
@@ -213,7 +213,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/multi_process/symmetric_mp/main.c b/examples/multi_process/symmetric_mp/main.c
index 01dc3acf34d5..85955375f1bf 100644
--- a/examples/multi_process/symmetric_mp/main.c
+++ b/examples/multi_process/symmetric_mp/main.c
@@ -176,18 +176,18 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 {
 	struct rte_eth_conf port_conf = {
 			.rxmode = {
-				.mq_mode	= ETH_MQ_RX_RSS,
+				.mq_mode	= RTE_ETH_MQ_RX_RSS,
 				.split_hdr_size = 0,
-				.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+				.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 			},
 			.rx_adv_conf = {
 				.rss_conf = {
 					.rss_key = NULL,
-					.rss_hf = ETH_RSS_IP,
+					.rss_hf = RTE_ETH_RSS_IP,
 				},
 			},
 			.txmode = {
-				.mq_mode = ETH_MQ_TX_NONE,
+				.mq_mode = RTE_ETH_MQ_TX_NONE,
 			}
 	};
 	const uint16_t rx_rings = num_queues, tx_rings = num_queues;
@@ -218,9 +218,9 @@ smp_port_init(uint16_t port, struct rte_mempool *mbuf_pool,
 
 	info.default_rxconf.rx_drop_en = 1;
 
-	if (info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &= info.flow_type_rss_offloads;
@@ -392,7 +392,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index e9a388710647..f110fc129f55 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -89,17 +89,17 @@ static uint16_t pkt_burst = NTB_DFLT_PKT_BURST;
 
 static struct rte_eth_conf eth_port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.split_hdr_size = 0,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
diff --git a/examples/packet_ordering/main.c b/examples/packet_ordering/main.c
index d2fe9f6b50d8..eb15899c902f 100644
--- a/examples/packet_ordering/main.c
+++ b/examples/packet_ordering/main.c
@@ -294,9 +294,9 @@ configure_eth_port(uint16_t port_id)
 		return ret;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(port_id, rxRings, txRings, &port_conf);
 	if (ret != 0)
 		return ret;
diff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c
index 2f593abf263d..86671655b432 100644
--- a/examples/performance-thread/l3fwd-thread/main.c
+++ b/examples/performance-thread/l3fwd-thread/main.c
@@ -307,19 +307,19 @@ static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);
 
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_RSS,
+		.mq_mode = RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_TCP,
+			.rss_hf = RTE_ETH_RSS_TCP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -2988,9 +2988,9 @@ parse_args(int argc, char **argv)
 
 			printf("jumbo frame is enabled - disabling simple TX path\n");
 			port_conf.rxmode.offloads |=
-					DEV_RX_OFFLOAD_JUMBO_FRAME;
+					RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 			port_conf.txmode.offloads |=
-					DEV_TX_OFFLOAD_MULTI_SEGS;
+					RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 			/* if no max-pkt-len set, use the default value
 			 * RTE_ETHER_MAX_LEN
@@ -3466,7 +3466,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
@@ -3577,9 +3577,9 @@ main(int argc, char **argv)
 				"Error during getting device (port %u) info: %s\n",
 				portid, strerror(-ret));
 
-		if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+		if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 			local_port_conf.txmode.offloads |=
-				DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+				RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 		local_port_conf.rx_adv_conf.rss_conf.rss_hf &=
 			dev_info.flow_type_rss_offloads;
diff --git a/examples/pipeline/obj.c b/examples/pipeline/obj.c
index 467cda5a6dac..7ea670f109b8 100644
--- a/examples/pipeline/obj.c
+++ b/examples/pipeline/obj.c
@@ -133,7 +133,7 @@ mempool_find(struct obj *obj, const char *name)
 static struct rte_eth_conf port_conf_default = {
 	.link_speeds = 0,
 	.rxmode = {
-		.mq_mode = ETH_MQ_RX_NONE,
+		.mq_mode = RTE_ETH_MQ_RX_NONE,
 		.max_rx_pkt_len = 9000, /* Jumbo frame max packet len */
 		.split_hdr_size = 0, /* Header split buffer size */
 	},
@@ -145,12 +145,12 @@ static struct rte_eth_conf port_conf_default = {
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.lpbk_mode = 0,
 };
 
-#define RETA_CONF_SIZE     (ETH_RSS_RETA_SIZE_512 / RTE_RETA_GROUP_SIZE)
+#define RETA_CONF_SIZE     (RTE_ETH_RSS_RETA_SIZE_512 / RTE_ETH_RETA_GROUP_SIZE)
 
 static int
 rss_setup(uint16_t port_id,
@@ -165,11 +165,11 @@ rss_setup(uint16_t port_id,
 	memset(reta_conf, 0, sizeof(reta_conf));
 
 	for (i = 0; i < reta_size; i++)
-		reta_conf[i / RTE_RETA_GROUP_SIZE].mask = UINT64_MAX;
+		reta_conf[i / RTE_ETH_RETA_GROUP_SIZE].mask = UINT64_MAX;
 
 	for (i = 0; i < reta_size; i++) {
-		uint32_t reta_id = i / RTE_RETA_GROUP_SIZE;
-		uint32_t reta_pos = i % RTE_RETA_GROUP_SIZE;
+		uint32_t reta_id = i / RTE_ETH_RETA_GROUP_SIZE;
+		uint32_t reta_pos = i % RTE_ETH_RETA_GROUP_SIZE;
 		uint32_t rss_qs_pos = i % rss->n_queues;
 
 		reta_conf[reta_id].reta[reta_pos] =
@@ -227,7 +227,7 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	rss = params->rx.rss;
 	if (rss) {
 		if ((port_info.reta_size == 0) ||
-			(port_info.reta_size > ETH_RSS_RETA_SIZE_512))
+			(port_info.reta_size > RTE_ETH_RSS_RETA_SIZE_512))
 			return NULL;
 
 		if ((rss->n_queues == 0) ||
@@ -245,9 +245,9 @@ link_create(struct obj *obj, const char *name, struct link_params *params)
 	/* Port */
 	memcpy(&port_conf, &port_conf_default, sizeof(port_conf));
 	if (rss) {
-		port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
+		port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
 		port_conf.rx_adv_conf.rss_conf.rss_hf =
-			(ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP) &
+			(RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP | RTE_ETH_RSS_UDP) &
 			port_info.flow_type_rss_offloads;
 	}
 
@@ -356,7 +356,7 @@ link_is_up(struct obj *obj, const char *name)
 	if (rte_eth_link_get(link->port_id, &link_params) < 0)
 		return 0;
 
-	return (link_params.link_status == ETH_LINK_DOWN) ? 0 : 1;
+	return (link_params.link_status == RTE_ETH_LINK_DOWN) ? 0 : 1;
 }
 
 struct link *
diff --git a/examples/ptpclient/ptpclient.c b/examples/ptpclient/ptpclient.c
index 4f32ade7fbf7..db32b0d6c427 100644
--- a/examples/ptpclient/ptpclient.c
+++ b/examples/ptpclient/ptpclient.c
@@ -197,14 +197,14 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+	if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Force full Tx path in the driver, required for IEEE1588 */
-	port_conf.txmode.offloads |= DEV_TX_OFFLOAD_MULTI_SEGS;
+	port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/qos_meter/main.c b/examples/qos_meter/main.c
index 7ffccc8369dc..5ef14c176b11 100644
--- a/examples/qos_meter/main.c
+++ b/examples/qos_meter/main.c
@@ -51,19 +51,19 @@ static struct rte_mempool *pool = NULL;
  ***/
 static struct rte_eth_conf port_conf = {
 	.rxmode = {
-		.mq_mode	= ETH_MQ_RX_RSS,
+		.mq_mode	= RTE_ETH_MQ_RX_RSS,
 		.max_rx_pkt_len = RTE_ETHER_MAX_LEN,
 		.split_hdr_size = 0,
-		.offloads = DEV_RX_OFFLOAD_CHECKSUM,
+		.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,
 	},
 	.rx_adv_conf = {
 		.rss_conf = {
 			.rss_key = NULL,
-			.rss_hf = ETH_RSS_IP,
+			.rss_hf = RTE_ETH_RSS_IP,
 		},
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -333,8 +333,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_rx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
@@ -379,8 +379,8 @@ main(int argc, char **argv)
 			"Error during getting device (port %u) info: %s\n",
 			port_tx, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
-		conf.txmode.offloads |= DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
+		conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	conf.rx_adv_conf.rss_conf.rss_hf &= dev_info.flow_type_rss_offloads;
 	if (conf.rx_adv_conf.rss_conf.rss_hf !=
diff --git a/examples/qos_sched/init.c b/examples/qos_sched/init.c
index 1abe003fc6ae..e750928fb89d 100644
--- a/examples/qos_sched/init.c
+++ b/examples/qos_sched/init.c
@@ -61,7 +61,7 @@ static struct rte_eth_conf port_conf = {
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_DCB_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 };
 
@@ -106,9 +106,9 @@ app_init_port(uint16_t portid, struct rte_mempool *mp)
 			"Error during getting device (port %u) info: %s\n",
 			portid, strerror(-ret));
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		local_port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	ret = rte_eth_dev_configure(portid, 1, 1, &local_port_conf);
 	if (ret < 0)
 		rte_exit(EXIT_FAILURE,
diff --git a/examples/rxtx_callbacks/main.c b/examples/rxtx_callbacks/main.c
index 6f20f98b2b30..08df716dc0fb 100644
--- a/examples/rxtx_callbacks/main.c
+++ b/examples/rxtx_callbacks/main.c
@@ -145,17 +145,17 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	if (hw_timestamping) {
-		if (!(dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TIMESTAMP)) {
+		if (!(dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_TIMESTAMP)) {
 			printf("\nERROR: Port %u does not support hardware timestamping\n"
 					, port);
 			return -1;
 		}
-		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TIMESTAMP;
+		port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
 		rte_mbuf_dyn_rx_timestamp_register(&hwts_dynfield_offset, NULL);
 		if (hwts_dynfield_offset < 0) {
 			printf("ERROR: Failed to register timestamp field\n");
diff --git a/examples/server_node_efd/server/init.c b/examples/server_node_efd/server/init.c
index 9ebd88bac20e..074fee5b26b2 100644
--- a/examples/server_node_efd/server/init.c
+++ b/examples/server_node_efd/server/init.c
@@ -96,7 +96,7 @@ init_port(uint16_t port_num)
 	/* for port configuration all features are off by default */
 	struct rte_eth_conf port_conf = {
 		.rxmode = {
-			.mq_mode = ETH_MQ_RX_RSS,
+			.mq_mode = RTE_ETH_MQ_RX_RSS,
 		},
 	};
 	const uint16_t rx_rings = 1, tx_rings = num_nodes;
@@ -115,9 +115,9 @@ init_port(uint16_t port_num)
 	if (retval != 0)
 		return retval;
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/*
 	 * Standard DPDK port initialisation - config port, then set up
@@ -277,7 +277,7 @@ check_all_ports_link_status(uint16_t port_num, uint32_t port_mask)
 				continue;
 			}
 			/* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/skeleton/basicfwd.c b/examples/skeleton/basicfwd.c
index ae08261befd7..737df4ca2a17 100644
--- a/examples/skeleton/basicfwd.c
+++ b/examples/skeleton/basicfwd.c
@@ -55,9 +55,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index bc3d71c8984e..b1d363ae21db 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -109,23 +109,23 @@ static int nb_sockets;
 /* empty vmdq configuration structure. Filled in programatically */
 static struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 		/*
 		 * VLAN strip is necessary for 1G NIC such as I350,
 		 * this fixes bug of ipv4 forwarding in guest can't
 		 * forward pakets from one virtio dev to another virtio dev.
 		 */
-		.offloads = DEV_RX_OFFLOAD_VLAN_STRIP,
+		.offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
-		.offloads = (DEV_TX_OFFLOAD_IPV4_CKSUM |
-			     DEV_TX_OFFLOAD_TCP_CKSUM |
-			     DEV_TX_OFFLOAD_VLAN_INSERT |
-			     DEV_TX_OFFLOAD_MULTI_SEGS |
-			     DEV_TX_OFFLOAD_TCP_TSO),
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
+		.offloads = (RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_TCP_CKSUM |
+			     RTE_ETH_TX_OFFLOAD_VLAN_INSERT |
+			     RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
+			     RTE_ETH_TX_OFFLOAD_TCP_TSO),
 	},
 	.rx_adv_conf = {
 		/*
@@ -133,7 +133,7 @@ static struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -290,9 +290,9 @@ port_init(uint16_t port)
 		return -1;
 
 	rx_rings = (uint16_t)dev_info.max_rx_queues;
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	/* Configure ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
 	if (retval != 0) {
@@ -562,8 +562,8 @@ us_vhost_parse_args(int argc, char **argv)
 		case 'P':
 			promiscuous = 1;
 			vmdq_conf_default.rx_adv_conf.vmdq_rx_conf.rx_mode =
-				ETH_VMDQ_ACCEPT_BROADCAST |
-				ETH_VMDQ_ACCEPT_MULTICAST;
+				RTE_ETH_VMDQ_ACCEPT_BROADCAST |
+				RTE_ETH_VMDQ_ACCEPT_MULTICAST;
 			break;
 
 		case OPT_VM2VM_NUM:
@@ -638,7 +638,7 @@ us_vhost_parse_args(int argc, char **argv)
 			mergeable = !!ret;
 			if (ret) {
 				vmdq_conf_default.rxmode.offloads |=
-					DEV_RX_OFFLOAD_JUMBO_FRAME;
+					RTE_ETH_RX_OFFLOAD_JUMBO_FRAME;
 				vmdq_conf_default.rxmode.max_rx_pkt_len
 					= JUMBO_FRAME_MAX_SIZE;
 			}
diff --git a/examples/vm_power_manager/main.c b/examples/vm_power_manager/main.c
index 7d5bf6855426..dddcde40efe2 100644
--- a/examples/vm_power_manager/main.c
+++ b/examples/vm_power_manager/main.c
@@ -78,9 +78,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	/* Configure the Ethernet device. */
 	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
@@ -278,7 +278,7 @@ check_all_ports_link_status(uint32_t port_mask)
 				continue;
 			}
 		       /* clear all_ports_up flag if any link down */
-			if (link.link_status == ETH_LINK_DOWN) {
+			if (link.link_status == RTE_ETH_LINK_DOWN) {
 				all_ports_up = 0;
 				break;
 			}
diff --git a/examples/vmdq/main.c b/examples/vmdq/main.c
index d3bc19f78ee5..16782a5d850f 100644
--- a/examples/vmdq/main.c
+++ b/examples/vmdq/main.c
@@ -66,12 +66,12 @@ static uint8_t rss_enable;
 /* empty vmdq configuration structure. Filled in programatically */
 static const struct rte_eth_conf vmdq_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_ONLY,
 		.split_hdr_size = 0,
 	},
 
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_NONE,
+		.mq_mode = RTE_ETH_MQ_TX_NONE,
 	},
 	.rx_adv_conf = {
 		/*
@@ -79,7 +79,7 @@ static const struct rte_eth_conf vmdq_conf_default = {
 		 * appropriate values
 		 */
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_8_POOLS,
+			.nb_queue_pools = RTE_ETH_8_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -157,11 +157,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
 	(void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
 		   sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -259,9 +259,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 	retval = rte_eth_dev_configure(port, rxRings, txRings, &port_conf);
 	if (retval != 0)
 		return retval;
diff --git a/examples/vmdq_dcb/main.c b/examples/vmdq_dcb/main.c
index 685a03bdd194..3677a34da849 100644
--- a/examples/vmdq_dcb/main.c
+++ b/examples/vmdq_dcb/main.c
@@ -60,8 +60,8 @@ static uint16_t ports[RTE_MAX_ETHPORTS];
 static unsigned num_ports;
 
 /* number of pools (if user does not specify any, 32 by default */
-static enum rte_eth_nb_pools num_pools = ETH_32_POOLS;
-static enum rte_eth_nb_tcs   num_tcs   = ETH_4_TCS;
+static enum rte_eth_nb_pools num_pools = RTE_ETH_32_POOLS;
+static enum rte_eth_nb_tcs   num_tcs   = RTE_ETH_4_TCS;
 static uint16_t num_queues, num_vmdq_queues;
 static uint16_t vmdq_pool_base, vmdq_queue_base;
 static uint8_t rss_enable;
@@ -69,11 +69,11 @@ static uint8_t rss_enable;
 /* Empty vmdq+dcb configuration structure. Filled in programmatically. 8< */
 static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	.rxmode = {
-		.mq_mode        = ETH_MQ_RX_VMDQ_DCB,
+		.mq_mode        = RTE_ETH_MQ_RX_VMDQ_DCB,
 		.split_hdr_size = 0,
 	},
 	.txmode = {
-		.mq_mode = ETH_MQ_TX_VMDQ_DCB,
+		.mq_mode = RTE_ETH_MQ_TX_VMDQ_DCB,
 	},
 	/*
 	 * should be overridden separately in code with
@@ -81,7 +81,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	 */
 	.rx_adv_conf = {
 		.vmdq_dcb_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -89,12 +89,12 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 			.dcb_tc = {0},
 		},
 		.dcb_rx_conf = {
-				.nb_tcs = ETH_4_TCS,
+				.nb_tcs = RTE_ETH_4_TCS,
 				/** Traffic class each UP mapped to. */
 				.dcb_tc = {0},
 		},
 		.vmdq_rx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.enable_default_pool = 0,
 			.default_pool = 0,
 			.nb_pool_maps = 0,
@@ -103,7 +103,7 @@ static const struct rte_eth_conf vmdq_dcb_conf_default = {
 	},
 	.tx_adv_conf = {
 		.vmdq_dcb_tx_conf = {
-			.nb_queue_pools = ETH_32_POOLS,
+			.nb_queue_pools = RTE_ETH_32_POOLS,
 			.dcb_tc = {0},
 		},
 	},
@@ -157,7 +157,7 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 		conf.pool_map[i].pools = 1UL << i;
 		vmdq_conf.pool_map[i].pools = 1UL << i;
 	}
-	for (i = 0; i < ETH_DCB_NUM_USER_PRIORITIES; i++){
+	for (i = 0; i < RTE_ETH_DCB_NUM_USER_PRIORITIES; i++) {
 		conf.dcb_tc[i] = i % num_tcs;
 		dcb_conf.dcb_tc[i] = i % num_tcs;
 		tx_conf.dcb_tc[i] = i % num_tcs;
@@ -173,11 +173,11 @@ get_eth_conf(struct rte_eth_conf *eth_conf)
 	(void)(rte_memcpy(&eth_conf->tx_adv_conf.vmdq_dcb_tx_conf, &tx_conf,
 			  sizeof(tx_conf)));
 	if (rss_enable) {
-		eth_conf->rxmode.mq_mode = ETH_MQ_RX_VMDQ_DCB_RSS;
-		eth_conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP |
-							ETH_RSS_UDP |
-							ETH_RSS_TCP |
-							ETH_RSS_SCTP;
+		eth_conf->rxmode.mq_mode = RTE_ETH_MQ_RX_VMDQ_DCB_RSS;
+		eth_conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP |
+							RTE_ETH_RSS_UDP |
+							RTE_ETH_RSS_TCP |
+							RTE_ETH_RSS_SCTP;
 	}
 	return 0;
 }
@@ -271,9 +271,9 @@ port_init(uint16_t port, struct rte_mempool *mbuf_pool)
 		return retval;
 	}
 
-	if (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MBUF_FAST_FREE)
+	if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 		port_conf.txmode.offloads |=
-			DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+			RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
 
 	rss_hf_tmp = port_conf.rx_adv_conf.rss_conf.rss_hf;
 	port_conf.rx_adv_conf.rss_conf.rss_hf &=
@@ -390,9 +390,9 @@ vmdq_parse_num_pools(const char *q_arg)
 	if (n != 16 && n != 32)
 		return -1;
 	if (n == 16)
-		num_pools = ETH_16_POOLS;
+		num_pools = RTE_ETH_16_POOLS;
 	else
-		num_pools = ETH_32_POOLS;
+		num_pools = RTE_ETH_32_POOLS;
 
 	return 0;
 }
@@ -412,9 +412,9 @@ vmdq_parse_num_tcs(const char *q_arg)
 	if (n != 4 && n != 8)
 		return -1;
 	if (n == 4)
-		num_tcs = ETH_4_TCS;
+		num_tcs = RTE_ETH_4_TCS;
 	else
-		num_tcs = ETH_8_TCS;
+		num_tcs = RTE_ETH_8_TCS;
 
 	return 0;
 }
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 9d95cd11e1b5..9ccbd7db4063 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -98,9 +98,6 @@ static const struct rte_eth_xstats_name_off eth_dev_txq_stats_strings[] = {
 #define RTE_NB_TXQ_STATS RTE_DIM(eth_dev_txq_stats_strings)
 
 #define RTE_RX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_RX_OFFLOAD_##_name, #_name }
-
-#define RTE_ETH_RX_OFFLOAD_BIT2STR(_name)	\
 	{ RTE_ETH_RX_OFFLOAD_##_name, #_name }
 
 static const struct {
@@ -126,14 +123,14 @@ static const struct {
 	RTE_RX_OFFLOAD_BIT2STR(SCTP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(OUTER_UDP_CKSUM),
 	RTE_RX_OFFLOAD_BIT2STR(RSS_HASH),
-	RTE_ETH_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
+	RTE_RX_OFFLOAD_BIT2STR(BUFFER_SPLIT),
 };
 
 #undef RTE_RX_OFFLOAD_BIT2STR
 #undef RTE_ETH_RX_OFFLOAD_BIT2STR
 
 #define RTE_TX_OFFLOAD_BIT2STR(_name)	\
-	{ DEV_TX_OFFLOAD_##_name, #_name }
+	{ RTE_ETH_TX_OFFLOAD_##_name, #_name }
 
 static const struct {
 	uint64_t offload;
@@ -1184,32 +1181,32 @@ uint32_t
 rte_eth_speed_bitflag(uint32_t speed, int duplex)
 {
 	switch (speed) {
-	case ETH_SPEED_NUM_10M:
-		return duplex ? ETH_LINK_SPEED_10M : ETH_LINK_SPEED_10M_HD;
-	case ETH_SPEED_NUM_100M:
-		return duplex ? ETH_LINK_SPEED_100M : ETH_LINK_SPEED_100M_HD;
-	case ETH_SPEED_NUM_1G:
-		return ETH_LINK_SPEED_1G;
-	case ETH_SPEED_NUM_2_5G:
-		return ETH_LINK_SPEED_2_5G;
-	case ETH_SPEED_NUM_5G:
-		return ETH_LINK_SPEED_5G;
-	case ETH_SPEED_NUM_10G:
-		return ETH_LINK_SPEED_10G;
-	case ETH_SPEED_NUM_20G:
-		return ETH_LINK_SPEED_20G;
-	case ETH_SPEED_NUM_25G:
-		return ETH_LINK_SPEED_25G;
-	case ETH_SPEED_NUM_40G:
-		return ETH_LINK_SPEED_40G;
-	case ETH_SPEED_NUM_50G:
-		return ETH_LINK_SPEED_50G;
-	case ETH_SPEED_NUM_56G:
-		return ETH_LINK_SPEED_56G;
-	case ETH_SPEED_NUM_100G:
-		return ETH_LINK_SPEED_100G;
-	case ETH_SPEED_NUM_200G:
-		return ETH_LINK_SPEED_200G;
+	case RTE_ETH_SPEED_NUM_10M:
+		return duplex ? RTE_ETH_LINK_SPEED_10M : RTE_ETH_LINK_SPEED_10M_HD;
+	case RTE_ETH_SPEED_NUM_100M:
+		return duplex ? RTE_ETH_LINK_SPEED_100M : RTE_ETH_LINK_SPEED_100M_HD;
+	case RTE_ETH_SPEED_NUM_1G:
+		return RTE_ETH_LINK_SPEED_1G;
+	case RTE_ETH_SPEED_NUM_2_5G:
+		return RTE_ETH_LINK_SPEED_2_5G;
+	case RTE_ETH_SPEED_NUM_5G:
+		return RTE_ETH_LINK_SPEED_5G;
+	case RTE_ETH_SPEED_NUM_10G:
+		return RTE_ETH_LINK_SPEED_10G;
+	case RTE_ETH_SPEED_NUM_20G:
+		return RTE_ETH_LINK_SPEED_20G;
+	case RTE_ETH_SPEED_NUM_25G:
+		return RTE_ETH_LINK_SPEED_25G;
+	case RTE_ETH_SPEED_NUM_40G:
+		return RTE_ETH_LINK_SPEED_40G;
+	case RTE_ETH_SPEED_NUM_50G:
+		return RTE_ETH_LINK_SPEED_50G;
+	case RTE_ETH_SPEED_NUM_56G:
+		return RTE_ETH_LINK_SPEED_56G;
+	case RTE_ETH_SPEED_NUM_100G:
+		return RTE_ETH_LINK_SPEED_100G;
+	case RTE_ETH_SPEED_NUM_200G:
+		return RTE_ETH_LINK_SPEED_200G;
 	default:
 		return 0;
 	}
@@ -1458,7 +1455,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If jumbo frames are enabled, check that the maximum RX packet
 	 * length is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_JUMBO_FRAME) {
 		if (dev_conf->rxmode.max_rx_pkt_len > dev_info.max_rx_pktlen) {
 			RTE_ETHDEV_LOG(ERR,
 				"Ethdev port_id=%u max_rx_pkt_len %u > max valid value %u\n",
@@ -1491,7 +1488,7 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If LRO is enabled, check that the maximum aggregated packet
 	 * size is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		if (dev_conf->rxmode.max_lro_pkt_size == 0)
 			dev->data->dev_conf.rxmode.max_lro_pkt_size =
 				dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -1543,12 +1540,12 @@ rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	}
 
 	/* Check if Rx RSS distribution is disabled but RSS hash is enabled. */
-	if (((dev_conf->rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) == 0) &&
-	    (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) {
+	if (((dev_conf->rxmode.mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) == 0) &&
+	    (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH)) {
 		RTE_ETHDEV_LOG(ERR,
 			"Ethdev port_id=%u config invalid Rx mq_mode without RSS but %s offload is requested\n",
 			port_id,
-			rte_eth_dev_rx_offload_name(DEV_RX_OFFLOAD_RSS_HASH));
+			rte_eth_dev_rx_offload_name(RTE_ETH_RX_OFFLOAD_RSS_HASH));
 		ret = -EINVAL;
 		goto rollback;
 	}
@@ -2157,7 +2154,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 	 * If LRO is enabled, check that the maximum aggregated packet
 	 * size is supported by the configured device.
 	 */
-	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+	if (local_conf.offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) {
 		if (dev->data->dev_conf.rxmode.max_lro_pkt_size == 0)
 			dev->data->dev_conf.rxmode.max_lro_pkt_size =
 				dev->data->dev_conf.rxmode.max_rx_pkt_len;
@@ -2752,21 +2749,21 @@ const char *
 rte_eth_link_speed_to_str(uint32_t link_speed)
 {
 	switch (link_speed) {
-	case ETH_SPEED_NUM_NONE: return "None";
-	case ETH_SPEED_NUM_10M:  return "10 Mbps";
-	case ETH_SPEED_NUM_100M: return "100 Mbps";
-	case ETH_SPEED_NUM_1G:   return "1 Gbps";
-	case ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
-	case ETH_SPEED_NUM_5G:   return "5 Gbps";
-	case ETH_SPEED_NUM_10G:  return "10 Gbps";
-	case ETH_SPEED_NUM_20G:  return "20 Gbps";
-	case ETH_SPEED_NUM_25G:  return "25 Gbps";
-	case ETH_SPEED_NUM_40G:  return "40 Gbps";
-	case ETH_SPEED_NUM_50G:  return "50 Gbps";
-	case ETH_SPEED_NUM_56G:  return "56 Gbps";
-	case ETH_SPEED_NUM_100G: return "100 Gbps";
-	case ETH_SPEED_NUM_200G: return "200 Gbps";
-	case ETH_SPEED_NUM_UNKNOWN: return "Unknown";
+	case RTE_ETH_SPEED_NUM_NONE: return "None";
+	case RTE_ETH_SPEED_NUM_10M:  return "10 Mbps";
+	case RTE_ETH_SPEED_NUM_100M: return "100 Mbps";
+	case RTE_ETH_SPEED_NUM_1G:   return "1 Gbps";
+	case RTE_ETH_SPEED_NUM_2_5G: return "2.5 Gbps";
+	case RTE_ETH_SPEED_NUM_5G:   return "5 Gbps";
+	case RTE_ETH_SPEED_NUM_10G:  return "10 Gbps";
+	case RTE_ETH_SPEED_NUM_20G:  return "20 Gbps";
+	case RTE_ETH_SPEED_NUM_25G:  return "25 Gbps";
+	case RTE_ETH_SPEED_NUM_40G:  return "40 Gbps";
+	case RTE_ETH_SPEED_NUM_50G:  return "50 Gbps";
+	case RTE_ETH_SPEED_NUM_56G:  return "56 Gbps";
+	case RTE_ETH_SPEED_NUM_100G: return "100 Gbps";
+	case RTE_ETH_SPEED_NUM_200G: return "200 Gbps";
+	case RTE_ETH_SPEED_NUM_UNKNOWN: return "Unknown";
 	default: return "Invalid";
 	}
 }
@@ -2790,14 +2787,14 @@ rte_eth_link_to_str(char *str, size_t len, const struct rte_eth_link *eth_link)
 		return -EINVAL;
 	}
 
-	if (eth_link->link_status == ETH_LINK_DOWN)
+	if (eth_link->link_status == RTE_ETH_LINK_DOWN)
 		return snprintf(str, len, "Link down");
 	else
 		return snprintf(str, len, "Link up at %s %s %s",
 			rte_eth_link_speed_to_str(eth_link->link_speed),
-			(eth_link->link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(eth_link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 			"FDX" : "HDX",
-			(eth_link->link_autoneg == ETH_LINK_AUTONEG) ?
+			(eth_link->link_autoneg == RTE_ETH_LINK_AUTONEG) ?
 			"Autoneg" : "Fixed");
 }
 
@@ -3663,7 +3660,7 @@ rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on)
 	dev = &rte_eth_devices[port_id];
 
 	if (!(dev->data->dev_conf.rxmode.offloads &
-	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
+	      RTE_ETH_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_ETHDEV_LOG(ERR, "Port %u: vlan-filtering disabled\n",
 			port_id);
 		return -ENOSYS;
@@ -3750,44 +3747,44 @@ rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask)
 	dev_offloads = orig_offloads;
 
 	/* check which option changed by application */
-	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP);
+	cur = !!(offload_mask & RTE_ETH_VLAN_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_STRIP;
-		mask |= ETH_VLAN_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+		mask |= RTE_ETH_VLAN_STRIP_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER);
+	cur = !!(offload_mask & RTE_ETH_VLAN_FILTER_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_FILTER;
-		mask |= ETH_VLAN_FILTER_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
+		mask |= RTE_ETH_VLAN_FILTER_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND);
+	cur = !!(offload_mask & RTE_ETH_VLAN_EXTEND_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_VLAN_EXTEND;
-		mask |= ETH_VLAN_EXTEND_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
+		mask |= RTE_ETH_VLAN_EXTEND_MASK;
 	}
 
-	cur = !!(offload_mask & ETH_QINQ_STRIP_OFFLOAD);
-	org = !!(dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP);
+	cur = !!(offload_mask & RTE_ETH_QINQ_STRIP_OFFLOAD);
+	org = !!(dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP);
 	if (cur != org) {
 		if (cur)
-			dev_offloads |= DEV_RX_OFFLOAD_QINQ_STRIP;
+			dev_offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 		else
-			dev_offloads &= ~DEV_RX_OFFLOAD_QINQ_STRIP;
-		mask |= ETH_QINQ_STRIP_MASK;
+			dev_offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
+		mask |= RTE_ETH_QINQ_STRIP_MASK;
 	}
 
 	/*no change*/
@@ -3832,17 +3829,17 @@ rte_eth_dev_get_vlan_offload(uint16_t port_id)
 	dev = &rte_eth_devices[port_id];
 	dev_offloads = &dev->data->dev_conf.rxmode.offloads;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
-		ret |= ETH_VLAN_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+		ret |= RTE_ETH_VLAN_STRIP_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
-		ret |= ETH_VLAN_FILTER_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
+		ret |= RTE_ETH_VLAN_FILTER_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
-		ret |= ETH_VLAN_EXTEND_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_VLAN_EXTEND)
+		ret |= RTE_ETH_VLAN_EXTEND_OFFLOAD;
 
-	if (*dev_offloads & DEV_RX_OFFLOAD_QINQ_STRIP)
-		ret |= ETH_QINQ_STRIP_OFFLOAD;
+	if (*dev_offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+		ret |= RTE_ETH_QINQ_STRIP_OFFLOAD;
 
 	return ret;
 }
@@ -3919,7 +3916,7 @@ rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (pfc_conf->priority > (ETH_DCB_NUM_USER_PRIORITIES - 1)) {
+	if (pfc_conf->priority > (RTE_ETH_DCB_NUM_USER_PRIORITIES - 1)) {
 		RTE_ETHDEV_LOG(ERR, "Invalid priority, only 0-7 allowed\n");
 		return -EINVAL;
 	}
@@ -3937,7 +3934,7 @@ eth_check_reta_mask(struct rte_eth_rss_reta_entry64 *reta_conf,
 {
 	uint16_t i, num;
 
-	num = (reta_size + RTE_RETA_GROUP_SIZE - 1) / RTE_RETA_GROUP_SIZE;
+	num = (reta_size + RTE_ETH_RETA_GROUP_SIZE - 1) / RTE_ETH_RETA_GROUP_SIZE;
 	for (i = 0; i < num; i++) {
 		if (reta_conf[i].mask)
 			return 0;
@@ -3959,8 +3956,8 @@ eth_check_reta_entry(struct rte_eth_rss_reta_entry64 *reta_conf,
 	}
 
 	for (i = 0; i < reta_size; i++) {
-		idx = i / RTE_RETA_GROUP_SIZE;
-		shift = i % RTE_RETA_GROUP_SIZE;
+		idx = i / RTE_ETH_RETA_GROUP_SIZE;
+		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		if ((reta_conf[idx].mask & (1ULL << shift)) &&
 			(reta_conf[idx].reta[shift] >= max_rxq)) {
 			RTE_ETHDEV_LOG(ERR,
@@ -4116,7 +4113,7 @@ rte_eth_dev_udp_tunnel_port_add(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4142,7 +4139,7 @@ rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (udp_tunnel->prot_type >= RTE_TUNNEL_TYPE_MAX) {
+	if (udp_tunnel->prot_type >= RTE_ETH_TUNNEL_TYPE_MAX) {
 		RTE_ETHDEV_LOG(ERR, "Invalid tunnel type\n");
 		return -EINVAL;
 	}
@@ -4283,8 +4280,8 @@ rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *addr,
 			port_id);
 		return -EINVAL;
 	}
-	if (pool >= ETH_64_POOLS) {
-		RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", ETH_64_POOLS - 1);
+	if (pool >= RTE_ETH_64_POOLS) {
+		RTE_ETHDEV_LOG(ERR, "Pool id must be 0-%d\n", RTE_ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
@@ -4548,21 +4545,21 @@ rte_eth_mirror_rule_set(uint16_t port_id,
 		return -EINVAL;
 	}
 
-	if (mirror_conf->dst_pool >= ETH_64_POOLS) {
+	if (mirror_conf->dst_pool >= RTE_ETH_64_POOLS) {
 		RTE_ETHDEV_LOG(ERR, "Invalid dst pool, pool id must be 0-%d\n",
-			ETH_64_POOLS - 1);
+			RTE_ETH_64_POOLS - 1);
 		return -EINVAL;
 	}
 
-	if ((mirror_conf->rule_type & (ETH_MIRROR_VIRTUAL_POOL_UP |
-	     ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
+	if ((mirror_conf->rule_type & (RTE_ETH_MIRROR_VIRTUAL_POOL_UP |
+	     RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN)) &&
 	    (mirror_conf->pool_mask == 0)) {
 		RTE_ETHDEV_LOG(ERR,
 			"Invalid mirror pool, pool mask can not be 0\n");
 		return -EINVAL;
 	}
 
-	if ((mirror_conf->rule_type & ETH_MIRROR_VLAN) &&
+	if ((mirror_conf->rule_type & RTE_ETH_MIRROR_VLAN) &&
 	    mirror_conf->vlan.vlan_mask == 0) {
 		RTE_ETHDEV_LOG(ERR,
 			"Invalid vlan mask, vlan mask can not be 0\n");
@@ -6238,7 +6235,7 @@ eth_dev_handle_port_link_status(const char *cmd __rte_unused,
 	rte_tel_data_add_dict_string(d, status_str, "UP");
 	rte_tel_data_add_dict_u64(d, "speed", link.link_speed);
 	rte_tel_data_add_dict_string(d, "duplex",
-			(link.link_duplex == ETH_LINK_FULL_DUPLEX) ?
+			(link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX) ?
 				"full-duplex" : "half-duplex");
 	return 0;
 }
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index d2b27c351fdb..cabfe452c808 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -249,7 +249,7 @@ void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);
  * field is not supported, its value is 0.
  * All byte-related statistics do not include Ethernet FCS regardless
  * of whether these bytes have been delivered to the application
- * (see DEV_RX_OFFLOAD_KEEP_CRC).
+ * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
  */
 struct rte_eth_stats {
 	uint64_t ipackets;  /**< Total number of successfully received packets. */
@@ -279,61 +279,99 @@ struct rte_eth_stats {
 /**
  * Device supported speeds bitmap flags
  */
-#define ETH_LINK_SPEED_AUTONEG  (0 <<  0)  /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_FIXED    (1 <<  0)  /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_10M_HD   (1 <<  1)  /**<  10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M      (1 <<  2)  /**<  10 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M_HD  (1 <<  3)  /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M     (1 <<  4)  /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_1G       (1 <<  5)  /**<   1 Gbps */
-#define ETH_LINK_SPEED_2_5G     (1 <<  6)  /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_5G       (1 <<  7)  /**<   5 Gbps */
-#define ETH_LINK_SPEED_10G      (1 <<  8)  /**<  10 Gbps */
-#define ETH_LINK_SPEED_20G      (1 <<  9)  /**<  20 Gbps */
-#define ETH_LINK_SPEED_25G      (1 << 10)  /**<  25 Gbps */
-#define ETH_LINK_SPEED_40G      (1 << 11)  /**<  40 Gbps */
-#define ETH_LINK_SPEED_50G      (1 << 12)  /**<  50 Gbps */
-#define ETH_LINK_SPEED_56G      (1 << 13)  /**<  56 Gbps */
-#define ETH_LINK_SPEED_100G     (1 << 14)  /**< 100 Gbps */
-#define ETH_LINK_SPEED_200G     (1 << 15)  /**< 200 Gbps */
+#define RTE_ETH_LINK_SPEED_AUTONEG  (0 <<  0)  /**< Autonegotiate (all speeds) */
+#define ETH_LINK_SPEED_AUTONEG	RTE_ETH_LINK_SPEED_AUTONEG
+#define RTE_ETH_LINK_SPEED_FIXED    (1 <<  0)  /**< Disable autoneg (fixed speed) */
+#define ETH_LINK_SPEED_FIXED	RTE_ETH_LINK_SPEED_FIXED
+#define RTE_ETH_LINK_SPEED_10M_HD   (1 <<  1)  /**<  10 Mbps half-duplex */
+#define ETH_LINK_SPEED_10M_HD	RTE_ETH_LINK_SPEED_10M_HD
+#define RTE_ETH_LINK_SPEED_10M      (1 <<  2)  /**<  10 Mbps full-duplex */
+#define ETH_LINK_SPEED_10M	RTE_ETH_LINK_SPEED_10M
+#define RTE_ETH_LINK_SPEED_100M_HD  (1 <<  3)  /**< 100 Mbps half-duplex */
+#define ETH_LINK_SPEED_100M_HD	RTE_ETH_LINK_SPEED_100M_HD
+#define RTE_ETH_LINK_SPEED_100M     (1 <<  4)  /**< 100 Mbps full-duplex */
+#define ETH_LINK_SPEED_100M	RTE_ETH_LINK_SPEED_100M
+#define RTE_ETH_LINK_SPEED_1G       (1 <<  5)  /**<   1 Gbps */
+#define ETH_LINK_SPEED_1G	RTE_ETH_LINK_SPEED_1G
+#define RTE_ETH_LINK_SPEED_2_5G     (1 <<  6)  /**< 2.5 Gbps */
+#define ETH_LINK_SPEED_2_5G	RTE_ETH_LINK_SPEED_2_5G
+#define RTE_ETH_LINK_SPEED_5G       (1 <<  7)  /**<   5 Gbps */
+#define ETH_LINK_SPEED_5G	RTE_ETH_LINK_SPEED_5G
+#define RTE_ETH_LINK_SPEED_10G      (1 <<  8)  /**<  10 Gbps */
+#define ETH_LINK_SPEED_10G	RTE_ETH_LINK_SPEED_10G
+#define RTE_ETH_LINK_SPEED_20G      (1 <<  9)  /**<  20 Gbps */
+#define ETH_LINK_SPEED_20G	RTE_ETH_LINK_SPEED_20G
+#define RTE_ETH_LINK_SPEED_25G      (1 << 10)  /**<  25 Gbps */
+#define ETH_LINK_SPEED_25G	RTE_ETH_LINK_SPEED_25G
+#define RTE_ETH_LINK_SPEED_40G      (1 << 11)  /**<  40 Gbps */
+#define ETH_LINK_SPEED_40G	RTE_ETH_LINK_SPEED_40G
+#define RTE_ETH_LINK_SPEED_50G      (1 << 12)  /**<  50 Gbps */
+#define ETH_LINK_SPEED_50G	RTE_ETH_LINK_SPEED_50G
+#define RTE_ETH_LINK_SPEED_56G      (1 << 13)  /**<  56 Gbps */
+#define ETH_LINK_SPEED_56G	RTE_ETH_LINK_SPEED_56G
+#define RTE_ETH_LINK_SPEED_100G     (1 << 14)  /**< 100 Gbps */
+#define ETH_LINK_SPEED_100G	RTE_ETH_LINK_SPEED_100G
+#define RTE_ETH_LINK_SPEED_200G     (1 << 15)  /**< 200 Gbps */
+#define ETH_LINK_SPEED_200G	RTE_ETH_LINK_SPEED_200G
 
 /**
  * Ethernet numeric link speeds in Mbps
  */
-#define ETH_SPEED_NUM_NONE         0 /**< Not defined */
-#define ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
-#define ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
-#define ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
-#define ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
-#define ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
-#define ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
-#define ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
-#define ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
-#define ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
-#define ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define RTE_ETH_SPEED_NUM_NONE         0 /**< Not defined */
+#define ETH_SPEED_NUM_NONE	RTE_ETH_SPEED_NUM_NONE
+#define RTE_ETH_SPEED_NUM_10M         10 /**<  10 Mbps */
+#define ETH_SPEED_NUM_10M	RTE_ETH_SPEED_NUM_10M
+#define RTE_ETH_SPEED_NUM_100M       100 /**< 100 Mbps */
+#define ETH_SPEED_NUM_100M	RTE_ETH_SPEED_NUM_100M
+#define RTE_ETH_SPEED_NUM_1G        1000 /**<   1 Gbps */
+#define ETH_SPEED_NUM_1G	RTE_ETH_SPEED_NUM_1G
+#define RTE_ETH_SPEED_NUM_2_5G      2500 /**< 2.5 Gbps */
+#define ETH_SPEED_NUM_2_5G	RTE_ETH_SPEED_NUM_2_5G
+#define RTE_ETH_SPEED_NUM_5G        5000 /**<   5 Gbps */
+#define ETH_SPEED_NUM_5G	RTE_ETH_SPEED_NUM_5G
+#define RTE_ETH_SPEED_NUM_10G      10000 /**<  10 Gbps */
+#define ETH_SPEED_NUM_10G	RTE_ETH_SPEED_NUM_10G
+#define RTE_ETH_SPEED_NUM_20G      20000 /**<  20 Gbps */
+#define ETH_SPEED_NUM_20G	RTE_ETH_SPEED_NUM_20G
+#define RTE_ETH_SPEED_NUM_25G      25000 /**<  25 Gbps */
+#define ETH_SPEED_NUM_25G	RTE_ETH_SPEED_NUM_25G
+#define RTE_ETH_SPEED_NUM_40G      40000 /**<  40 Gbps */
+#define ETH_SPEED_NUM_40G	RTE_ETH_SPEED_NUM_40G
+#define RTE_ETH_SPEED_NUM_50G      50000 /**<  50 Gbps */
+#define ETH_SPEED_NUM_50G	RTE_ETH_SPEED_NUM_50G
+#define RTE_ETH_SPEED_NUM_56G      56000 /**<  56 Gbps */
+#define ETH_SPEED_NUM_56G	RTE_ETH_SPEED_NUM_56G
+#define RTE_ETH_SPEED_NUM_100G    100000 /**< 100 Gbps */
+#define ETH_SPEED_NUM_100G	RTE_ETH_SPEED_NUM_100G
+#define RTE_ETH_SPEED_NUM_200G    200000 /**< 200 Gbps */
+#define ETH_SPEED_NUM_200G	RTE_ETH_SPEED_NUM_200G
+#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
+#define ETH_SPEED_NUM_UNKNOWN	RTE_ETH_SPEED_NUM_UNKNOWN
 
 /**
  * A structure used to retrieve link-level information of an Ethernet port.
  */
 __extension__
 struct rte_eth_link {
-	uint32_t link_speed;        /**< ETH_SPEED_NUM_ */
-	uint16_t link_duplex  : 1;  /**< ETH_LINK_[HALF/FULL]_DUPLEX */
-	uint16_t link_autoneg : 1;  /**< ETH_LINK_[AUTONEG/FIXED] */
-	uint16_t link_status  : 1;  /**< ETH_LINK_[DOWN/UP] */
+	uint32_t link_speed;        /**< RTE_ETH_SPEED_NUM_ */
+	uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
+	uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
+	uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
 } __rte_aligned(8);      /**< aligned for atomic64 read/write */
 
 /* Utility constants */
-#define ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
-#define ETH_LINK_UP          1 /**< Link is up (see link_status). */
-#define ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
+#define ETH_LINK_HALF_DUPLEX	RTE_ETH_LINK_HALF_DUPLEX
+#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
+#define ETH_LINK_FULL_DUPLEX	RTE_ETH_LINK_FULL_DUPLEX
+#define RTE_ETH_LINK_DOWN        0 /**< Link is down (see link_status). */
+#define ETH_LINK_DOWN		RTE_ETH_LINK_DOWN
+#define RTE_ETH_LINK_UP          1 /**< Link is up (see link_status). */
+#define ETH_LINK_UP		RTE_ETH_LINK_UP
+#define RTE_ETH_LINK_FIXED       0 /**< No autonegotiation (see link_autoneg). */
+#define ETH_LINK_FIXED		RTE_ETH_LINK_FIXED
+#define RTE_ETH_LINK_AUTONEG     1 /**< Autonegotiated (see link_autoneg). */
+#define ETH_LINK_AUTONEG	RTE_ETH_LINK_AUTONEG
 #define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
 
 /**
@@ -349,9 +387,12 @@ struct rte_eth_thresh {
 /**
  *  Simple flags are used for rte_eth_conf.rxmode.mq_mode.
  */
-#define ETH_MQ_RX_RSS_FLAG  0x1
-#define ETH_MQ_RX_DCB_FLAG  0x2
-#define ETH_MQ_RX_VMDQ_FLAG 0x4
+#define RTE_ETH_MQ_RX_RSS_FLAG  0x1
+#define ETH_MQ_RX_RSS_FLAG	RTE_ETH_MQ_RX_RSS_FLAG
+#define RTE_ETH_MQ_RX_DCB_FLAG  0x2
+#define ETH_MQ_RX_DCB_FLAG	RTE_ETH_MQ_RX_DCB_FLAG
+#define RTE_ETH_MQ_RX_VMDQ_FLAG 0x4
+#define ETH_MQ_RX_VMDQ_FLAG	RTE_ETH_MQ_RX_VMDQ_FLAG
 
 /**
  *  A set of values to identify what method is to be used to route
@@ -359,50 +400,49 @@ struct rte_eth_thresh {
  */
 enum rte_eth_rx_mq_mode {
 	/** None of DCB,RSS or VMDQ mode */
-	ETH_MQ_RX_NONE = 0,
+	RTE_ETH_MQ_RX_NONE = 0,
 
 	/** For RX side, only RSS is on */
-	ETH_MQ_RX_RSS = ETH_MQ_RX_RSS_FLAG,
+	RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
 	/** For RX side,only DCB is on. */
-	ETH_MQ_RX_DCB = ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Both DCB and RSS enable */
-	ETH_MQ_RX_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 
 	/** Only VMDQ, no RSS nor DCB */
-	ETH_MQ_RX_VMDQ_ONLY = ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** RSS mode with VMDQ */
-	ETH_MQ_RX_VMDQ_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
 	/** Use VMDQ+DCB to route traffic to queues */
-	ETH_MQ_RX_VMDQ_DCB = ETH_MQ_RX_VMDQ_FLAG | ETH_MQ_RX_DCB_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
 	/** Enable both VMDQ and DCB in VMDq */
-	ETH_MQ_RX_VMDQ_DCB_RSS = ETH_MQ_RX_RSS_FLAG | ETH_MQ_RX_DCB_FLAG |
-				 ETH_MQ_RX_VMDQ_FLAG,
+	RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
+				 RTE_ETH_MQ_RX_VMDQ_FLAG,
 };
 
-/**
- * for rx mq mode backward compatible
- */
-#define ETH_RSS                       ETH_MQ_RX_RSS
-#define VMDQ_DCB                      ETH_MQ_RX_VMDQ_DCB
-#define ETH_DCB_RX                    ETH_MQ_RX_DCB
+#define ETH_MQ_RX_NONE		RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS		RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB		RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS	RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY	RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS	RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB	RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS	RTE_ETH_MQ_RX_VMDQ_DCB_RSS
 
 /**
  * A set of values to identify what method is to be used to transmit
  * packets using multi-TCs.
  */
 enum rte_eth_tx_mq_mode {
-	ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
-	ETH_MQ_TX_DCB,          /**< For TX side,only DCB is on. */
-	ETH_MQ_TX_VMDQ_DCB,	/**< For TX side,both DCB and VT is on. */
-	ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
+	RTE_ETH_MQ_TX_NONE    = 0,  /**< It is in neither DCB nor VT mode. */
+	RTE_ETH_MQ_TX_DCB,          /**< For TX side,only DCB is on. */
+	RTE_ETH_MQ_TX_VMDQ_DCB,	/**< For TX side,both DCB and VT is on. */
+	RTE_ETH_MQ_TX_VMDQ_ONLY,    /**< Only VT on, no DCB */
 };
-
-/**
- * for tx mq mode backward compatible
- */
-#define ETH_DCB_NONE                ETH_MQ_TX_NONE
-#define ETH_VMDQ_DCB_TX             ETH_MQ_TX_VMDQ_DCB
-#define ETH_DCB_TX                  ETH_MQ_TX_DCB
+#define ETH_MQ_TX_NONE		RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB		RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB	RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY	RTE_ETH_MQ_TX_VMDQ_ONLY
 
 /**
  * A structure used to configure the RX features of an Ethernet port.
@@ -415,7 +455,7 @@ struct rte_eth_rxmode {
 	uint32_t max_lro_pkt_size;
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
 	/**
-	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -430,12 +470,17 @@ struct rte_eth_rxmode {
  * Note that single VLAN is treated the same as inner VLAN.
  */
 enum rte_vlan_type {
-	ETH_VLAN_TYPE_UNKNOWN = 0,
-	ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
-	ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
-	ETH_VLAN_TYPE_MAX,
+	RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
+	RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
+	RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
+	RTE_ETH_VLAN_TYPE_MAX,
 };
 
+#define ETH_VLAN_TYPE_UNKNOWN	RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER	RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER	RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX	RTE_ETH_VLAN_TYPE_MAX
+
 /**
  * A structure used to describe a vlan filter.
  * If the bit corresponding to a VID is set, such VID is on.
@@ -506,59 +551,96 @@ struct rte_eth_rss_conf {
  * Below macros are defined for RSS offload types, they can be used to
  * fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
  */
-#define ETH_RSS_IPV4               (1ULL << 2)
-#define ETH_RSS_FRAG_IPV4          (1ULL << 3)
-#define ETH_RSS_NONFRAG_IPV4_TCP   (1ULL << 4)
-#define ETH_RSS_NONFRAG_IPV4_UDP   (1ULL << 5)
-#define ETH_RSS_NONFRAG_IPV4_SCTP  (1ULL << 6)
-#define ETH_RSS_NONFRAG_IPV4_OTHER (1ULL << 7)
-#define ETH_RSS_IPV6               (1ULL << 8)
-#define ETH_RSS_FRAG_IPV6          (1ULL << 9)
-#define ETH_RSS_NONFRAG_IPV6_TCP   (1ULL << 10)
-#define ETH_RSS_NONFRAG_IPV6_UDP   (1ULL << 11)
-#define ETH_RSS_NONFRAG_IPV6_SCTP  (1ULL << 12)
-#define ETH_RSS_NONFRAG_IPV6_OTHER (1ULL << 13)
-#define ETH_RSS_L2_PAYLOAD         (1ULL << 14)
-#define ETH_RSS_IPV6_EX            (1ULL << 15)
-#define ETH_RSS_IPV6_TCP_EX        (1ULL << 16)
-#define ETH_RSS_IPV6_UDP_EX        (1ULL << 17)
-#define ETH_RSS_PORT               (1ULL << 18)
-#define ETH_RSS_VXLAN              (1ULL << 19)
-#define ETH_RSS_GENEVE             (1ULL << 20)
-#define ETH_RSS_NVGRE              (1ULL << 21)
-#define ETH_RSS_GTPU               (1ULL << 23)
-#define ETH_RSS_ETH                (1ULL << 24)
-#define ETH_RSS_S_VLAN             (1ULL << 25)
-#define ETH_RSS_C_VLAN             (1ULL << 26)
-#define ETH_RSS_ESP                (1ULL << 27)
-#define ETH_RSS_AH                 (1ULL << 28)
-#define ETH_RSS_L2TPV3             (1ULL << 29)
-#define ETH_RSS_PFCP               (1ULL << 30)
-#define ETH_RSS_PPPOE		   (1ULL << 31)
-#define ETH_RSS_ECPRI		   (1ULL << 32)
-#define ETH_RSS_MPLS		   (1ULL << 33)
+#define RTE_ETH_RSS_IPV4               (1ULL << 2)
+#define ETH_RSS_IPV4		RTE_ETH_RSS_IPV4
+#define RTE_ETH_RSS_FRAG_IPV4          (1ULL << 3)
+#define ETH_RSS_FRAG_IPV4	RTE_ETH_RSS_FRAG_IPV4
+#define RTE_ETH_RSS_NONFRAG_IPV4_TCP   (1ULL << 4)
+#define ETH_RSS_NONFRAG_IPV4_TCP	RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV4_UDP   (1ULL << 5)
+#define ETH_RSS_NONFRAG_IPV4_UDP	RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP  (1ULL << 6)
+#define ETH_RSS_NONFRAG_IPV4_SCTP	RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER (1ULL << 7)
+#define ETH_RSS_NONFRAG_IPV4_OTHER	RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define RTE_ETH_RSS_IPV6               (1ULL << 8)
+#define ETH_RSS_IPV6		RTE_ETH_RSS_IPV6
+#define RTE_ETH_RSS_FRAG_IPV6          (1ULL << 9)
+#define ETH_RSS_FRAG_IPV6	RTE_ETH_RSS_FRAG_IPV6
+#define RTE_ETH_RSS_NONFRAG_IPV6_TCP   (1ULL << 10)
+#define ETH_RSS_NONFRAG_IPV6_TCP	RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define RTE_ETH_RSS_NONFRAG_IPV6_UDP   (1ULL << 11)
+#define ETH_RSS_NONFRAG_IPV6_UDP	RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP  (1ULL << 12)
+#define ETH_RSS_NONFRAG_IPV6_SCTP	RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER (1ULL << 13)
+#define ETH_RSS_NONFRAG_IPV6_OTHER	RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define RTE_ETH_RSS_L2_PAYLOAD         (1ULL << 14)
+#define ETH_RSS_L2_PAYLOAD	RTE_ETH_RSS_L2_PAYLOAD
+#define RTE_ETH_RSS_IPV6_EX            (1ULL << 15)
+#define ETH_RSS_IPV6_EX		RTE_ETH_RSS_IPV6_EX
+#define RTE_ETH_RSS_IPV6_TCP_EX        (1ULL << 16)
+#define ETH_RSS_IPV6_TCP_EX	RTE_ETH_RSS_IPV6_TCP_EX
+#define RTE_ETH_RSS_IPV6_UDP_EX        (1ULL << 17)
+#define ETH_RSS_IPV6_UDP_EX	RTE_ETH_RSS_IPV6_UDP_EX
+#define RTE_ETH_RSS_PORT               (1ULL << 18)
+#define ETH_RSS_PORT		RTE_ETH_RSS_PORT
+#define RTE_ETH_RSS_VXLAN              (1ULL << 19)
+#define ETH_RSS_VXLAN		RTE_ETH_RSS_VXLAN
+#define RTE_ETH_RSS_GENEVE             (1ULL << 20)
+#define ETH_RSS_GENEVE		RTE_ETH_RSS_GENEVE
+#define RTE_ETH_RSS_NVGRE              (1ULL << 21)
+#define ETH_RSS_NVGRE		RTE_ETH_RSS_NVGRE
+#define RTE_ETH_RSS_GTPU               (1ULL << 23)
+#define ETH_RSS_GTPU		RTE_ETH_RSS_GTPU
+#define RTE_ETH_RSS_ETH                (1ULL << 24)
+#define ETH_RSS_ETH		RTE_ETH_RSS_ETH
+#define RTE_ETH_RSS_S_VLAN             (1ULL << 25)
+#define ETH_RSS_S_VLAN		RTE_ETH_RSS_S_VLAN
+#define RTE_ETH_RSS_C_VLAN             (1ULL << 26)
+#define ETH_RSS_C_VLAN		RTE_ETH_RSS_C_VLAN
+#define RTE_ETH_RSS_ESP                (1ULL << 27)
+#define ETH_RSS_ESP		RTE_ETH_RSS_ESP
+#define RTE_ETH_RSS_AH                 (1ULL << 28)
+#define ETH_RSS_AH		RTE_ETH_RSS_AH
+#define RTE_ETH_RSS_L2TPV3             (1ULL << 29)
+#define ETH_RSS_L2TPV3		RTE_ETH_RSS_L2TPV3
+#define RTE_ETH_RSS_PFCP               (1ULL << 30)
+#define ETH_RSS_PFCP		RTE_ETH_RSS_PFCP
+#define RTE_ETH_RSS_PPPOE              (1ULL << 31)
+#define ETH_RSS_PPPOE		RTE_ETH_RSS_PPPOE
+#define RTE_ETH_RSS_ECPRI              (1ULL << 32)
+#define ETH_RSS_ECPRI		RTE_ETH_RSS_ECPRI
+#define RTE_ETH_RSS_MPLS               (1ULL << 33)
+#define ETH_RSS_MPLS		RTE_ETH_RSS_MPLS
 
 /*
- * We use the following macros to combine with above ETH_RSS_* for
+ * We use the following macros to combine with above RTE_ETH_RSS_* for
  * more specific input set selection. These bits are defined starting
  * from the high end of the 64 bits.
- * Note: If we use above ETH_RSS_* without SRC/DST_ONLY, it represents
+ * Note: If we use above RTE_ETH_RSS_* without SRC/DST_ONLY, it represents
  * both SRC and DST are taken into account. If SRC_ONLY and DST_ONLY of
  * the same level are used simultaneously, it is the same case as none of
  * them are added.
  */
-#define ETH_RSS_L3_SRC_ONLY        (1ULL << 63)
-#define ETH_RSS_L3_DST_ONLY        (1ULL << 62)
-#define ETH_RSS_L4_SRC_ONLY        (1ULL << 61)
-#define ETH_RSS_L4_DST_ONLY        (1ULL << 60)
-#define ETH_RSS_L2_SRC_ONLY        (1ULL << 59)
-#define ETH_RSS_L2_DST_ONLY        (1ULL << 58)
+#define RTE_ETH_RSS_L3_SRC_ONLY        (1ULL << 63)
+#define ETH_RSS_L3_SRC_ONLY	RTE_ETH_RSS_L3_SRC_ONLY
+#define RTE_ETH_RSS_L3_DST_ONLY        (1ULL << 62)
+#define ETH_RSS_L3_DST_ONLY	RTE_ETH_RSS_L3_DST_ONLY
+#define RTE_ETH_RSS_L4_SRC_ONLY        (1ULL << 61)
+#define ETH_RSS_L4_SRC_ONLY	RTE_ETH_RSS_L4_SRC_ONLY
+#define RTE_ETH_RSS_L4_DST_ONLY        (1ULL << 60)
+#define ETH_RSS_L4_DST_ONLY	RTE_ETH_RSS_L4_DST_ONLY
+#define RTE_ETH_RSS_L2_SRC_ONLY        (1ULL << 59)
+#define ETH_RSS_L2_SRC_ONLY	RTE_ETH_RSS_L2_SRC_ONLY
+#define RTE_ETH_RSS_L2_DST_ONLY        (1ULL << 58)
+#define ETH_RSS_L2_DST_ONLY	RTE_ETH_RSS_L2_DST_ONLY
 
 /*
  * Only select IPV6 address prefix as RSS input set according to
  * https://tools.ietf.org/html/rfc6052
- * Must be combined with ETH_RSS_IPV6, ETH_RSS_NONFRAG_IPV6_UDP,
- * ETH_RSS_NONFRAG_IPV6_TCP, ETH_RSS_NONFRAG_IPV6_SCTP.
+ * Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
+ * RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
  */
 #define RTE_ETH_RSS_L3_PRE32	   (1ULL << 57)
 #define RTE_ETH_RSS_L3_PRE40	   (1ULL << 56)
@@ -580,22 +662,27 @@ struct rte_eth_rss_conf {
  * It basically stands for the innermost encapsulation level RSS
  * can be performed on according to PMD and device capabilities.
  */
-#define ETH_RSS_LEVEL_PMD_DEFAULT       (0ULL << 50)
+#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT       (0ULL << 50)
+#define ETH_RSS_LEVEL_PMD_DEFAULT	RTE_ETH_RSS_LEVEL_PMD_DEFAULT
 
 /**
  * level 1, requests RSS to be performed on the outermost packet
  * encapsulation level.
  */
-#define ETH_RSS_LEVEL_OUTERMOST         (1ULL << 50)
+#define RTE_ETH_RSS_LEVEL_OUTERMOST         (1ULL << 50)
+#define ETH_RSS_LEVEL_OUTERMOST	RTE_ETH_RSS_LEVEL_OUTERMOST
 
 /**
  * level 2, requests RSS to be performed on the specified inner packet
  * encapsulation level, from outermost to innermost (lower to higher values).
  */
-#define ETH_RSS_LEVEL_INNERMOST         (2ULL << 50)
-#define ETH_RSS_LEVEL_MASK              (3ULL << 50)
+#define RTE_ETH_RSS_LEVEL_INNERMOST         (2ULL << 50)
+#define ETH_RSS_LEVEL_INNERMOST	RTE_ETH_RSS_LEVEL_INNERMOST
+#define RTE_ETH_RSS_LEVEL_MASK              (3ULL << 50)
+#define ETH_RSS_LEVEL_MASK	RTE_ETH_RSS_LEVEL_MASK
 
-#define ETH_RSS_LEVEL(rss_hf) ((rss_hf & ETH_RSS_LEVEL_MASK) >> 50)
+#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
+#define ETH_RSS_LEVEL(rss_hf)	RTE_ETH_RSS_LEVEL(rss_hf)
 
 /**
  * For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -610,222 +697,286 @@ struct rte_eth_rss_conf {
 static inline uint64_t
 rte_eth_rss_hf_refine(uint64_t rss_hf)
 {
-	if ((rss_hf & ETH_RSS_L3_SRC_ONLY) && (rss_hf & ETH_RSS_L3_DST_ONLY))
-		rss_hf &= ~(ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY);
+	if ((rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L3_DST_ONLY))
+		rss_hf &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);
 
-	if ((rss_hf & ETH_RSS_L4_SRC_ONLY) && (rss_hf & ETH_RSS_L4_DST_ONLY))
-		rss_hf &= ~(ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY);
+	if ((rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L4_DST_ONLY))
+		rss_hf &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);
 
 	return rss_hf;
 }
 
-#define ETH_RSS_IPV6_PRE32 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE32 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32	RTE_ETH_RSS_IPV6_PRE32
 
-#define ETH_RSS_IPV6_PRE40 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE40 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40	RTE_ETH_RSS_IPV6_PRE40
 
-#define ETH_RSS_IPV6_PRE48 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE48 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48	RTE_ETH_RSS_IPV6_PRE48
 
-#define ETH_RSS_IPV6_PRE56 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE56 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56	RTE_ETH_RSS_IPV6_PRE56
 
-#define ETH_RSS_IPV6_PRE64 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE64 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64	RTE_ETH_RSS_IPV6_PRE64
 
-#define ETH_RSS_IPV6_PRE96 ( \
-		ETH_RSS_IPV6 | \
+#define RTE_ETH_RSS_IPV6_PRE96 ( \
+		RTE_ETH_RSS_IPV6 | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96	RTE_ETH_RSS_IPV6_PRE96
 
-#define ETH_RSS_IPV6_PRE32_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_UDP	RTE_ETH_RSS_IPV6_PRE32_UDP
 
-#define ETH_RSS_IPV6_PRE40_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_UDP	RTE_ETH_RSS_IPV6_PRE40_UDP
 
-#define ETH_RSS_IPV6_PRE48_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_UDP	RTE_ETH_RSS_IPV6_PRE48_UDP
 
-#define ETH_RSS_IPV6_PRE56_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_UDP	RTE_ETH_RSS_IPV6_PRE56_UDP
 
-#define ETH_RSS_IPV6_PRE64_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_UDP	RTE_ETH_RSS_IPV6_PRE64_UDP
 
-#define ETH_RSS_IPV6_PRE96_UDP ( \
-		ETH_RSS_NONFRAG_IPV6_UDP | \
+#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_UDP	RTE_ETH_RSS_IPV6_PRE96_UDP
 
-#define ETH_RSS_IPV6_PRE32_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_TCP	RTE_ETH_RSS_IPV6_PRE32_TCP
 
-#define ETH_RSS_IPV6_PRE40_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_TCP	RTE_ETH_RSS_IPV6_PRE40_TCP
 
-#define ETH_RSS_IPV6_PRE48_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_TCP	RTE_ETH_RSS_IPV6_PRE48_TCP
 
-#define ETH_RSS_IPV6_PRE56_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_TCP	RTE_ETH_RSS_IPV6_PRE56_TCP
 
-#define ETH_RSS_IPV6_PRE64_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_TCP	RTE_ETH_RSS_IPV6_PRE64_TCP
 
-#define ETH_RSS_IPV6_PRE96_TCP ( \
-		ETH_RSS_NONFRAG_IPV6_TCP | \
+#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
 		RTE_ETH_RSS_L3_PRE96)
+#define ETH_RSS_IPV6_PRE96_TCP	RTE_ETH_RSS_IPV6_PRE96_TCP
 
-#define ETH_RSS_IPV6_PRE32_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE32)
+#define ETH_RSS_IPV6_PRE32_SCTP	RTE_ETH_RSS_IPV6_PRE32_SCTP
 
-#define ETH_RSS_IPV6_PRE40_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE40)
+#define ETH_RSS_IPV6_PRE40_SCTP	RTE_ETH_RSS_IPV6_PRE40_SCTP
 
-#define ETH_RSS_IPV6_PRE48_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE48)
+#define ETH_RSS_IPV6_PRE48_SCTP	RTE_ETH_RSS_IPV6_PRE48_SCTP
 
-#define ETH_RSS_IPV6_PRE56_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE56)
+#define ETH_RSS_IPV6_PRE56_SCTP	RTE_ETH_RSS_IPV6_PRE56_SCTP
 
-#define ETH_RSS_IPV6_PRE64_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE64)
+#define ETH_RSS_IPV6_PRE64_SCTP	RTE_ETH_RSS_IPV6_PRE64_SCTP
 
-#define ETH_RSS_IPV6_PRE96_SCTP ( \
-		ETH_RSS_NONFRAG_IPV6_SCTP | \
+#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
+		RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
 		RTE_ETH_RSS_L3_PRE96)
-
-#define ETH_RSS_IP ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_IPV6_EX)
-
-#define ETH_RSS_UDP ( \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_IPV6_UDP_EX)
-
-#define ETH_RSS_TCP ( \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_IPV6_TCP_EX)
-
-#define ETH_RSS_SCTP ( \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define ETH_RSS_TUNNEL ( \
-	ETH_RSS_VXLAN  | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE)
-
-#define ETH_RSS_VLAN ( \
-	ETH_RSS_S_VLAN  | \
-	ETH_RSS_C_VLAN)
+#define ETH_RSS_IPV6_PRE96_SCTP	RTE_ETH_RSS_IPV6_PRE96_SCTP
+
+#define RTE_ETH_RSS_IP ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_IPV6_EX)
+#define ETH_RSS_IP	RTE_ETH_RSS_IP
+
+#define RTE_ETH_RSS_UDP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_IPV6_UDP_EX)
+#define ETH_RSS_UDP	RTE_ETH_RSS_UDP
+
+#define RTE_ETH_RSS_TCP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_IPV6_TCP_EX)
+#define ETH_RSS_TCP	RTE_ETH_RSS_TCP
+
+#define RTE_ETH_RSS_SCTP ( \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+#define ETH_RSS_SCTP	RTE_ETH_RSS_SCTP
+
+#define RTE_ETH_RSS_TUNNEL ( \
+	RTE_ETH_RSS_VXLAN  | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE)
+#define ETH_RSS_TUNNEL	RTE_ETH_RSS_TUNNEL
+
+#define RTE_ETH_RSS_VLAN ( \
+	RTE_ETH_RSS_S_VLAN  | \
+	RTE_ETH_RSS_C_VLAN)
+#define ETH_RSS_VLAN	RTE_ETH_RSS_VLAN
 
 /**< Mask of valid RSS hash protocols */
-#define ETH_RSS_PROTO_MASK ( \
-	ETH_RSS_IPV4 | \
-	ETH_RSS_FRAG_IPV4 | \
-	ETH_RSS_NONFRAG_IPV4_TCP | \
-	ETH_RSS_NONFRAG_IPV4_UDP | \
-	ETH_RSS_NONFRAG_IPV4_SCTP | \
-	ETH_RSS_NONFRAG_IPV4_OTHER | \
-	ETH_RSS_IPV6 | \
-	ETH_RSS_FRAG_IPV6 | \
-	ETH_RSS_NONFRAG_IPV6_TCP | \
-	ETH_RSS_NONFRAG_IPV6_UDP | \
-	ETH_RSS_NONFRAG_IPV6_SCTP | \
-	ETH_RSS_NONFRAG_IPV6_OTHER | \
-	ETH_RSS_L2_PAYLOAD | \
-	ETH_RSS_IPV6_EX | \
-	ETH_RSS_IPV6_TCP_EX | \
-	ETH_RSS_IPV6_UDP_EX | \
-	ETH_RSS_PORT  | \
-	ETH_RSS_VXLAN | \
-	ETH_RSS_GENEVE | \
-	ETH_RSS_NVGRE | \
-	ETH_RSS_MPLS)
+#define RTE_ETH_RSS_PROTO_MASK ( \
+	RTE_ETH_RSS_IPV4 | \
+	RTE_ETH_RSS_FRAG_IPV4 | \
+	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
+	RTE_ETH_RSS_IPV6 | \
+	RTE_ETH_RSS_FRAG_IPV6 | \
+	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
+	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
+	RTE_ETH_RSS_L2_PAYLOAD | \
+	RTE_ETH_RSS_IPV6_EX | \
+	RTE_ETH_RSS_IPV6_TCP_EX | \
+	RTE_ETH_RSS_IPV6_UDP_EX | \
+	RTE_ETH_RSS_PORT  | \
+	RTE_ETH_RSS_VXLAN | \
+	RTE_ETH_RSS_GENEVE | \
+	RTE_ETH_RSS_NVGRE | \
+	RTE_ETH_RSS_MPLS)
+#define ETH_RSS_PROTO_MASK	RTE_ETH_RSS_PROTO_MASK
 
 /*
  * Definitions used for redirection table entry size.
  * Some RSS RETA sizes may not be supported by some drivers, check the
  * documentation or the description of relevant functions for more details.
  */
-#define ETH_RSS_RETA_SIZE_64  64
-#define ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_512 512
-#define RTE_RETA_GROUP_SIZE   64
+#define RTE_ETH_RSS_RETA_SIZE_64  64
+#define ETH_RSS_RETA_SIZE_64	RTE_ETH_RSS_RETA_SIZE_64
+#define RTE_ETH_RSS_RETA_SIZE_128 128
+#define ETH_RSS_RETA_SIZE_128	RTE_ETH_RSS_RETA_SIZE_128
+#define RTE_ETH_RSS_RETA_SIZE_256 256
+#define ETH_RSS_RETA_SIZE_256	RTE_ETH_RSS_RETA_SIZE_256
+#define RTE_ETH_RSS_RETA_SIZE_512 512
+#define ETH_RSS_RETA_SIZE_512	RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_ETH_RETA_GROUP_SIZE   64
+#define RTE_RETA_GROUP_SIZE	RTE_ETH_RETA_GROUP_SIZE
 
 /* Definitions used for VMDQ and DCB functionality */
-#define ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDQ vlan filters. */
-#define ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
-#define ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDQ DCB queues. */
-#define ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS   64 /**< Maximum nb. of VMDQ vlan filters. */
+#define ETH_VMDQ_MAX_VLAN_FILTERS	RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8  /**< Maximum nb. of DCB priorities. */
+#define ETH_DCB_NUM_USER_PRIORITIES	RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define RTE_ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDQ DCB queues. */
+#define ETH_VMDQ_DCB_NUM_QUEUES	RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define RTE_ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
+#define ETH_DCB_NUM_QUEUES	RTE_ETH_DCB_NUM_QUEUES
 
 /* DCB capability defines */
-#define ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
-#define ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define RTE_ETH_DCB_PG_SUPPORT      0x00000001 /**< Priority Group(ETS) support. */
+#define ETH_DCB_PG_SUPPORT	RTE_ETH_DCB_PG_SUPPORT
+#define RTE_ETH_DCB_PFC_SUPPORT     0x00000002 /**< Priority Flow Control support. */
+#define ETH_DCB_PFC_SUPPORT	RTE_ETH_DCB_PFC_SUPPORT
 
 /* Definitions used for VLAN Offload functionality */
-#define ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
+#define RTE_ETH_VLAN_STRIP_OFFLOAD   0x0001 /**< VLAN Strip  On/Off */
+#define ETH_VLAN_STRIP_OFFLOAD	RTE_ETH_VLAN_STRIP_OFFLOAD
+#define RTE_ETH_VLAN_FILTER_OFFLOAD  0x0002 /**< VLAN Filter On/Off */
+#define ETH_VLAN_FILTER_OFFLOAD	RTE_ETH_VLAN_FILTER_OFFLOAD
+#define RTE_ETH_VLAN_EXTEND_OFFLOAD  0x0004 /**< VLAN Extend On/Off */
+#define ETH_VLAN_EXTEND_OFFLOAD	RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define RTE_ETH_QINQ_STRIP_OFFLOAD   0x0008 /**< QINQ Strip On/Off */
+#define ETH_QINQ_STRIP_OFFLOAD	RTE_ETH_QINQ_STRIP_OFFLOAD
 
 /* Definitions used for mask VLAN setting */
-#define ETH_VLAN_STRIP_MASK   0x0001 /**< VLAN Strip  setting mask */
-#define ETH_VLAN_FILTER_MASK  0x0002 /**< VLAN Filter  setting mask*/
-#define ETH_VLAN_EXTEND_MASK  0x0004 /**< VLAN Extend  setting mask*/
-#define ETH_QINQ_STRIP_MASK   0x0008 /**< QINQ Strip  setting mask */
-#define ETH_VLAN_ID_MAX       0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define RTE_ETH_VLAN_STRIP_MASK   0x0001 /**< VLAN Strip  setting mask */
+#define ETH_VLAN_STRIP_MASK	RTE_ETH_VLAN_STRIP_MASK
+#define RTE_ETH_VLAN_FILTER_MASK  0x0002 /**< VLAN Filter  setting mask*/
+#define ETH_VLAN_FILTER_MASK	RTE_ETH_VLAN_FILTER_MASK
+#define RTE_ETH_VLAN_EXTEND_MASK  0x0004 /**< VLAN Extend  setting mask*/
+#define ETH_VLAN_EXTEND_MASK	RTE_ETH_VLAN_EXTEND_MASK
+#define RTE_ETH_QINQ_STRIP_MASK   0x0008 /**< QINQ Strip  setting mask */
+#define ETH_QINQ_STRIP_MASK	RTE_ETH_QINQ_STRIP_MASK
+#define RTE_ETH_VLAN_ID_MAX       0x0FFF /**< VLAN ID is in lower 12 bits*/
+#define ETH_VLAN_ID_MAX		RTE_ETH_VLAN_ID_MAX
 
 /* Definitions used for receive MAC address   */
-#define ETH_NUM_RECEIVE_MAC_ADDR  128 /**< Maximum nb. of receive mac addr. */
+#define RTE_ETH_NUM_RECEIVE_MAC_ADDR  128 /**< Maximum nb. of receive mac addr. */
+#define ETH_NUM_RECEIVE_MAC_ADDR	RTE_ETH_NUM_RECEIVE_MAC_ADDR
 
 /* Definitions used for unicast hash  */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY  128 /**< Maximum nb. of UC hash array. */
+#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY  128 /**< Maximum nb. of UC hash array. */
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY	RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
 
 /* Definitions used for VMDQ pool rx mode setting */
-#define ETH_VMDQ_ACCEPT_UNTAG   0x0001 /**< accept untagged packets. */
-#define ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
-#define ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
-#define ETH_VMDQ_ACCEPT_BROADCAST   0x0008 /**< accept broadcast packets. */
-#define ETH_VMDQ_ACCEPT_MULTICAST   0x0010 /**< multicast promiscuous. */
+#define RTE_ETH_VMDQ_ACCEPT_UNTAG   0x0001 /**< accept untagged packets. */
+#define ETH_VMDQ_ACCEPT_UNTAG	RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define RTE_ETH_VMDQ_ACCEPT_HASH_MC 0x0002 /**< accept packets in multicast table . */
+#define ETH_VMDQ_ACCEPT_HASH_MC	RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define RTE_ETH_VMDQ_ACCEPT_HASH_UC 0x0004 /**< accept packets in unicast table. */
+#define ETH_VMDQ_ACCEPT_HASH_UC	RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define RTE_ETH_VMDQ_ACCEPT_BROADCAST   0x0008 /**< accept broadcast packets. */
+#define ETH_VMDQ_ACCEPT_BROADCAST	RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define RTE_ETH_VMDQ_ACCEPT_MULTICAST   0x0010 /**< multicast promiscuous. */
+#define ETH_VMDQ_ACCEPT_MULTICAST	RTE_ETH_VMDQ_ACCEPT_MULTICAST
 
 /** Maximum nb. of vlan per mirror rule */
-#define ETH_MIRROR_MAX_VLANS       64
+#define RTE_ETH_MIRROR_MAX_VLANS       64
+#define ETH_MIRROR_MAX_VLANS	RTE_ETH_MIRROR_MAX_VLANS
 
-#define ETH_MIRROR_VIRTUAL_POOL_UP     0x01  /**< Virtual Pool uplink Mirroring. */
-#define ETH_MIRROR_UPLINK_PORT         0x02  /**< Uplink Port Mirroring. */
-#define ETH_MIRROR_DOWNLINK_PORT       0x04  /**< Downlink Port Mirroring. */
-#define ETH_MIRROR_VLAN                0x08  /**< VLAN Mirroring. */
-#define ETH_MIRROR_VIRTUAL_POOL_DOWN   0x10  /**< Virtual Pool downlink Mirroring. */
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_UP     0x01  /**< Virtual Pool uplink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_UP	RTE_ETH_MIRROR_VIRTUAL_POOL_UP
+#define RTE_ETH_MIRROR_UPLINK_PORT         0x02  /**< Uplink Port Mirroring. */
+#define ETH_MIRROR_UPLINK_PORT	RTE_ETH_MIRROR_UPLINK_PORT
+#define RTE_ETH_MIRROR_DOWNLINK_PORT       0x04  /**< Downlink Port Mirroring. */
+#define ETH_MIRROR_DOWNLINK_PORT	RTE_ETH_MIRROR_DOWNLINK_PORT
+#define RTE_ETH_MIRROR_VLAN                0x08  /**< VLAN Mirroring. */
+#define ETH_MIRROR_VLAN		RTE_ETH_MIRROR_VLAN
+#define RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN   0x10  /**< Virtual Pool downlink Mirroring. */
+#define ETH_MIRROR_VIRTUAL_POOL_DOWN	RTE_ETH_MIRROR_VIRTUAL_POOL_DOWN
 
 /**
  * A structure used to configure VLAN traffic mirror of an Ethernet port.
@@ -833,7 +984,7 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
 struct rte_eth_vlan_mirror {
 	uint64_t vlan_mask; /**< mask for valid VLAN ID. */
 	/** VLAN ID list for vlan mirroring. */
-	uint16_t vlan_id[ETH_MIRROR_MAX_VLANS];
+	uint16_t vlan_id[RTE_ETH_MIRROR_MAX_VLANS];
 };
 
 /**
@@ -856,7 +1007,7 @@ struct rte_eth_mirror_conf {
 struct rte_eth_rss_reta_entry64 {
 	uint64_t mask;
 	/**< Mask bits indicate which entries need to be updated/queried. */
-	uint16_t reta[RTE_RETA_GROUP_SIZE];
+	uint16_t reta[RTE_ETH_RETA_GROUP_SIZE];
 	/**< Group of 64 redirection table entries. */
 };
 
@@ -865,38 +1016,44 @@ struct rte_eth_rss_reta_entry64 {
  * in DCB configurations
  */
 enum rte_eth_nb_tcs {
-	ETH_4_TCS = 4, /**< 4 TCs with DCB. */
-	ETH_8_TCS = 8  /**< 8 TCs with DCB. */
+	RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
+	RTE_ETH_8_TCS = 8  /**< 8 TCs with DCB. */
 };
+#define ETH_4_TCS RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_ETH_8_TCS
 
 /**
  * This enum indicates the possible number of queue pools
  * in VMDQ configurations.
  */
 enum rte_eth_nb_pools {
-	ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
-	ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
-	ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
-	ETH_64_POOLS = 64   /**< 64 VMDq pools. */
+	RTE_ETH_8_POOLS = 8,    /**< 8 VMDq pools. */
+	RTE_ETH_16_POOLS = 16,  /**< 16 VMDq pools. */
+	RTE_ETH_32_POOLS = 32,  /**< 32 VMDq pools. */
+	RTE_ETH_64_POOLS = 64   /**< 64 VMDq pools. */
 };
+#define ETH_8_POOLS	RTE_ETH_8_POOLS
+#define ETH_16_POOLS	RTE_ETH_16_POOLS
+#define ETH_32_POOLS	RTE_ETH_32_POOLS
+#define ETH_64_POOLS	RTE_ETH_64_POOLS
 
 /* This structure may be extended in future. */
 struct rte_eth_dcb_rx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_dcb_tx_conf {
 	enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_dcb_tx_conf {
 	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
 	/** Traffic class each UP mapped to. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 };
 
 struct rte_eth_vmdq_tx_conf {
@@ -922,8 +1079,8 @@ struct rte_eth_vmdq_dcb_conf {
 	struct {
 		uint16_t vlan_id; /**< The vlan id of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet rx */
-	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
-	uint8_t dcb_tc[ETH_DCB_NUM_USER_PRIORITIES];
+	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
+	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
 	/**< Selects a queue in a pool */
 };
 
@@ -934,7 +1091,7 @@ struct rte_eth_vmdq_dcb_conf {
  * Using this feature, packets are routed to a pool of queues. By default,
  * the pool selection is based on the MAC address, the vlan id in the
  * vlan tag as specified in the pool_map array.
- * Passing the ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
+ * Passing the RTE_ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
  * selection using only the MAC address. MAC address to pool mapping is done
  * using the rte_eth_dev_mac_addr_add function, with the pool parameter
  * corresponding to the pool id.
@@ -955,7 +1112,7 @@ struct rte_eth_vmdq_rx_conf {
 	struct {
 		uint16_t vlan_id; /**< The vlan id of the received frame */
 		uint64_t pools;   /**< Bitmask of pools for packet rx */
-	} pool_map[ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
+	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq vlan pool maps. */
 };
 
 /**
@@ -964,7 +1121,7 @@ struct rte_eth_vmdq_rx_conf {
 struct rte_eth_txmode {
 	enum rte_eth_tx_mq_mode mq_mode; /**< TX multi-queues mode. */
 	/**
-	 * Per-port Tx offloads to be set using DEV_TX_OFFLOAD_* flags.
+	 * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_offload_capa field on rte_eth_dev_info
 	 * structure are allowed to be set.
 	 */
@@ -1048,7 +1205,7 @@ struct rte_eth_rxconf {
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
 	/**
-	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
 	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1077,7 +1234,7 @@ struct rte_eth_txconf {
 
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
 	/**
-	 * Per-queue Tx offloads to be set  using DEV_TX_OFFLOAD_* flags.
+	 * Per-queue Tx offloads to be set  using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
 	 * fields on rte_eth_dev_info structure are allowed to be set.
 	 */
@@ -1188,12 +1345,17 @@ struct rte_eth_desc_lim {
  * This enum indicates the flow control mode
  */
 enum rte_eth_fc_mode {
-	RTE_FC_NONE = 0, /**< Disable flow control. */
-	RTE_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
-	RTE_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
-	RTE_FC_FULL      /**< Enable flow control on both side. */
+	RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
+	RTE_ETH_FC_RX_PAUSE, /**< RX pause frame, enable flowctrl on TX side. */
+	RTE_ETH_FC_TX_PAUSE, /**< TX pause frame, enable flowctrl on RX side. */
+	RTE_ETH_FC_FULL      /**< Enable flow control on both side. */
 };
 
+#define RTE_FC_NONE	RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE	RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE	RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL	RTE_ETH_FC_FULL
+
 /**
  * A structure used to configure Ethernet flow control parameter.
  * These parameters will be configured into the register of the NIC.
@@ -1224,18 +1386,29 @@ struct rte_eth_pfc_conf {
  * @see rte_eth_udp_tunnel
  */
 enum rte_eth_tunnel_type {
-	RTE_TUNNEL_TYPE_NONE = 0,
-	RTE_TUNNEL_TYPE_VXLAN,
-	RTE_TUNNEL_TYPE_GENEVE,
-	RTE_TUNNEL_TYPE_TEREDO,
-	RTE_TUNNEL_TYPE_NVGRE,
-	RTE_TUNNEL_TYPE_IP_IN_GRE,
-	RTE_L2_TUNNEL_TYPE_E_TAG,
-	RTE_TUNNEL_TYPE_VXLAN_GPE,
-	RTE_TUNNEL_TYPE_ECPRI,
-	RTE_TUNNEL_TYPE_MAX,
+	RTE_ETH_TUNNEL_TYPE_NONE = 0,
+	RTE_ETH_TUNNEL_TYPE_VXLAN,
+	RTE_ETH_TUNNEL_TYPE_GENEVE,
+	RTE_ETH_TUNNEL_TYPE_TEREDO,
+	RTE_ETH_TUNNEL_TYPE_NVGRE,
+	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
+	RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
+	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
+	RTE_ETH_TUNNEL_TYPE_ECPRI,
+	RTE_ETH_TUNNEL_TYPE_MAX,
 };
 
+#define RTE_TUNNEL_TYPE_NONE		RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN		RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE		RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO		RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE		RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE	RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG	RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE	RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI		RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX		RTE_ETH_TUNNEL_TYPE_MAX
+
 /* Deprecated API file for rte_eth_dev_filter_* functions */
 #include "rte_eth_ctrl.h"
 
@@ -1243,11 +1416,16 @@ enum rte_eth_tunnel_type {
  *  Memory space that can be configured to store Flow Director filters
  *  in the board memory.
  */
-enum rte_fdir_pballoc_type {
-	RTE_FDIR_PBALLOC_64K = 0,  /**< 64k. */
-	RTE_FDIR_PBALLOC_128K,     /**< 128k. */
-	RTE_FDIR_PBALLOC_256K,     /**< 256k. */
+enum rte_eth_fdir_pballoc_type {
+	RTE_ETH_FDIR_PBALLOC_64K = 0,  /**< 64k. */
+	RTE_ETH_FDIR_PBALLOC_128K,     /**< 128k. */
+	RTE_ETH_FDIR_PBALLOC_256K,     /**< 256k. */
 };
+#define rte_fdir_pballoc_type	rte_eth_fdir_pballoc_type
+
+#define RTE_FDIR_PBALLOC_64K	RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K	RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K	RTE_ETH_FDIR_PBALLOC_256K
 
 /**
  *  Select report mode of FDIR hash information in RX descriptors.
@@ -1264,9 +1442,9 @@ enum rte_fdir_status_mode {
  *
  * If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
  */
-struct rte_fdir_conf {
+struct rte_eth_fdir_conf {
 	enum rte_fdir_mode mode; /**< Flow Director mode. */
-	enum rte_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
+	enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */
 	enum rte_fdir_status_mode status;  /**< How to report FDIR hash. */
 	/** RX queue of packets matching a "drop" filter in perfect mode. */
 	uint8_t drop_queue;
@@ -1275,6 +1453,8 @@ struct rte_fdir_conf {
 	/**< Flex payload configuration. */
 };
 
+#define rte_fdir_conf rte_eth_fdir_conf
+
 /**
  * UDP tunneling configuration.
  *
@@ -1292,7 +1472,7 @@ struct rte_eth_udp_tunnel {
 /**
  * A structure used to enable/disable specific device interrupts.
  */
-struct rte_intr_conf {
+struct rte_eth_intr_conf {
 	/** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */
 	uint32_t lsc:1;
 	/** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */
@@ -1301,18 +1481,20 @@ struct rte_intr_conf {
 	uint32_t rmv:1;
 };
 
+#define rte_intr_conf rte_eth_intr_conf
+
 /**
  * A structure used to configure an Ethernet port.
  * Depending upon the RX multi-queue mode, extra advanced
  * configuration settings may be needed.
  */
 struct rte_eth_conf {
-	uint32_t link_speeds; /**< bitmap of ETH_LINK_SPEED_XXX of speeds to be
-				used. ETH_LINK_SPEED_FIXED disables link
+	uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be
+				used. RTE_ETH_LINK_SPEED_FIXED disables link
 				autonegotiation, and a unique speed shall be
 				set. Otherwise, the bitmap defines the set of
 				speeds to be advertised. If the special value
-				ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
+				RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds
 				supported are advertised. */
 	struct rte_eth_rxmode rxmode; /**< Port RX configuration. */
 	struct rte_eth_txmode txmode; /**< Port TX configuration. */
@@ -1338,49 +1520,72 @@ struct rte_eth_conf {
 		struct rte_eth_vmdq_tx_conf vmdq_tx_conf;
 		/**< Port vmdq TX configuration. */
 	} tx_adv_conf; /**< Port TX DCB configuration (union). */
-	/** Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
-	    is needed,and the variable must be set ETH_DCB_PFC_SUPPORT. */
+	/**
+	 * Currently,Priority Flow Control(PFC) are supported,if DCB with PFC
+	 * is needed,and the variable must be set RTE_ETH_DCB_PFC_SUPPORT.
+	 */
 	uint32_t dcb_capability_en;
-	struct rte_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
-	struct rte_intr_conf intr_conf; /**< Interrupt mode configuration. */
+	struct rte_eth_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */
+	struct rte_eth_intr_conf intr_conf; /**< Interrupt mode configuration. */
 };
 
 /**
  * RX offload capabilities of a device.
  */
-#define DEV_RX_OFFLOAD_VLAN_STRIP  0x00000001
-#define DEV_RX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_RX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_RX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_RX_OFFLOAD_TCP_LRO     0x00000010
-#define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
-#define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
-#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
-#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
-#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
-#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
-#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP  0x00000001
+#define DEV_RX_OFFLOAD_VLAN_STRIP	RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM  0x00000002
+#define DEV_RX_OFFLOAD_IPV4_CKSUM	RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM   0x00000004
+#define DEV_RX_OFFLOAD_UDP_CKSUM	RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM   0x00000008
+#define DEV_RX_OFFLOAD_TCP_CKSUM	RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_TCP_LRO     0x00000010
+#define DEV_RX_OFFLOAD_TCP_LRO		RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP  0x00000020
+#define DEV_RX_OFFLOAD_QINQ_STRIP	RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_MACSEC_STRIP	RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_FILTER	RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define RTE_ETH_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	RTE_ETH_RX_OFFLOAD_JUMBO_FRAME
+#define RTE_ETH_RX_OFFLOAD_SCATTER	0x00002000
+#define DEV_RX_OFFLOAD_SCATTER		RTE_ETH_RX_OFFLOAD_SCATTER
 /**
  * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_RX_OFFLOAD_TIMESTAMP	0x00004000
-#define DEV_RX_OFFLOAD_SECURITY         0x00008000
-#define DEV_RX_OFFLOAD_KEEP_CRC		0x00010000
-#define DEV_RX_OFFLOAD_SCTP_CKSUM	0x00020000
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
-#define DEV_RX_OFFLOAD_RSS_HASH		0x00080000
+#define RTE_ETH_RX_OFFLOAD_TIMESTAMP	0x00004000
+#define DEV_RX_OFFLOAD_TIMESTAMP	RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define RTE_ETH_RX_OFFLOAD_SECURITY     0x00008000
+#define DEV_RX_OFFLOAD_SECURITY		RTE_ETH_RX_OFFLOAD_SECURITY
+#define RTE_ETH_RX_OFFLOAD_KEEP_CRC	0x00010000
+#define DEV_RX_OFFLOAD_KEEP_CRC		RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM	0x00020000
+#define DEV_RX_OFFLOAD_SCTP_CKSUM	RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM  0x00040000
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM	RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define RTE_ETH_RX_OFFLOAD_RSS_HASH	0x00080000
+#define DEV_RX_OFFLOAD_RSS_HASH	RTE_ETH_RX_OFFLOAD_RSS_HASH
 #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT 0x00100000
 
-#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
-				 DEV_RX_OFFLOAD_UDP_CKSUM | \
-				 DEV_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
-			     DEV_RX_OFFLOAD_VLAN_FILTER | \
-			     DEV_RX_OFFLOAD_VLAN_EXTEND | \
-			     DEV_RX_OFFLOAD_QINQ_STRIP)
+#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_CHECKSUM	RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
+			     RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
+			     RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+#define DEV_RX_OFFLOAD_VLAN	RTE_ETH_RX_OFFLOAD_VLAN
 
 /*
  * If new Rx offload capabilities are defined, they also must be
@@ -1390,52 +1595,74 @@ struct rte_eth_conf {
 /**
  * TX offload capabilities of a device.
  */
-#define DEV_TX_OFFLOAD_VLAN_INSERT 0x00000001
-#define DEV_TX_OFFLOAD_IPV4_CKSUM  0x00000002
-#define DEV_TX_OFFLOAD_UDP_CKSUM   0x00000004
-#define DEV_TX_OFFLOAD_TCP_CKSUM   0x00000008
-#define DEV_TX_OFFLOAD_SCTP_CKSUM  0x00000010
-#define DEV_TX_OFFLOAD_TCP_TSO     0x00000020
-#define DEV_TX_OFFLOAD_UDP_TSO     0x00000040
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_QINQ_INSERT 0x00000100
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_MACSEC_INSERT    0x00002000
-#define DEV_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT 0x00000001
+#define DEV_TX_OFFLOAD_VLAN_INSERT	RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM  0x00000002
+#define DEV_TX_OFFLOAD_IPV4_CKSUM	RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM   0x00000004
+#define DEV_TX_OFFLOAD_UDP_CKSUM	RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM   0x00000008
+#define DEV_TX_OFFLOAD_TCP_CKSUM	RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM  0x00000010
+#define DEV_TX_OFFLOAD_SCTP_CKSUM	RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define RTE_ETH_TX_OFFLOAD_TCP_TSO     0x00000020
+#define DEV_TX_OFFLOAD_TCP_TSO		RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define RTE_ETH_TX_OFFLOAD_UDP_TSO     0x00000040
+#define DEV_TX_OFFLOAD_UDP_TSO		RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000080 /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM	RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT 0x00000100
+#define DEV_TX_OFFLOAD_QINQ_INSERT	RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO    0x00000200    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO	RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO      0x00000400    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO	RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO	RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO	RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT    0x00002000
+#define DEV_TX_OFFLOAD_MACSEC_INSERT	RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
+#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE	RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
 /**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
  * tx queue without SW lock.
  */
-#define DEV_TX_OFFLOAD_MULTI_SEGS	0x00008000
+#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS	0x00008000
+#define DEV_TX_OFFLOAD_MULTI_SEGS	RTE_ETH_TX_OFFLOAD_MULTI_SEGS
 /**< Device supports multi segment send. */
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE	0x00010000
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE	RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
 /**< Device supports optimization for fast release of mbufs.
  *   When set application must guarantee that per-queue all mbufs comes from
  *   the same mempool and has refcnt = 1.
  */
-#define DEV_TX_OFFLOAD_SECURITY         0x00020000
+#define RTE_ETH_TX_OFFLOAD_SECURITY         0x00020000
+#define DEV_TX_OFFLOAD_SECURITY	RTE_ETH_TX_OFFLOAD_SECURITY
 /**
  * Device supports generic UDP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_UDP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO      0x00040000
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO	RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
 /**
  * Device supports generic IP tunneled packet TSO.
  * Application must set PKT_TX_TUNNEL_IP and other mbuf fields required
  * for tunnel TSO.
  */
-#define DEV_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO       0x00080000
+#define DEV_TX_OFFLOAD_IP_TNL_TSO	RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
 /** Device supports outer UDP checksum */
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM  0x00100000
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM	RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
 /**
  * Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
  * if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
  * The mbuf field and flag are registered when the offload is configured.
  */
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 0x00200000
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP	RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
 /*
  * If new Tx offload capabilities are defined, they also must be
  * mentioned in rte_tx_offload_names in rte_ethdev.c file.
@@ -1567,7 +1794,7 @@ struct rte_eth_dev_info {
 	uint16_t vmdq_pool_base;  /**< First ID of VMDQ pools. */
 	struct rte_eth_desc_lim rx_desc_lim;  /**< RX descriptors limits */
 	struct rte_eth_desc_lim tx_desc_lim;  /**< TX descriptors limits */
-	uint32_t speed_capa;  /**< Supported speeds bitmap (ETH_LINK_SPEED_). */
+	uint32_t speed_capa;  /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */
 	/** Configured number of rx/tx queues */
 	uint16_t nb_rx_queues; /**< Number of RX queues. */
 	uint16_t nb_tx_queues; /**< Number of TX queues. */
@@ -1672,8 +1899,10 @@ struct rte_eth_xstat_name {
 	char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
 };
 
-#define ETH_DCB_NUM_TCS    8
-#define ETH_MAX_VMDQ_POOL  64
+#define RTE_ETH_DCB_NUM_TCS    8
+#define ETH_DCB_NUM_TCS	RTE_ETH_DCB_NUM_TCS
+#define RTE_ETH_MAX_VMDQ_POOL  64
+#define ETH_MAX_VMDQ_POOL	RTE_ETH_MAX_VMDQ_POOL
 
 /**
  * A structure used to get the information of queue and
@@ -1684,12 +1913,12 @@ struct rte_eth_dcb_tc_queue_mapping {
 	struct {
 		uint16_t base;
 		uint16_t nb_queue;
-	} tc_rxq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	} tc_rxq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
 	/** rx queues assigned to tc per Pool */
 	struct {
 		uint16_t base;
 		uint16_t nb_queue;
-	} tc_txq[ETH_MAX_VMDQ_POOL][ETH_DCB_NUM_TCS];
+	} tc_txq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
 };
 
 /**
@@ -1698,8 +1927,8 @@ struct rte_eth_dcb_tc_queue_mapping {
  */
 struct rte_eth_dcb_info {
 	uint8_t nb_tcs;        /**< number of TCs */
-	uint8_t prio_tc[ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
-	uint8_t tc_bws[ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
+	uint8_t prio_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */
+	uint8_t tc_bws[RTE_ETH_DCB_NUM_TCS]; /**< TX BW percentage for each TC */
 	/** rx queues assigned to tc */
 	struct rte_eth_dcb_tc_queue_mapping tc_queue;
 };
@@ -1723,7 +1952,7 @@ enum rte_eth_fec_mode {
 
 /* A structure used to get capabilities per link speed */
 struct rte_eth_fec_capa {
-	uint32_t speed; /**< Link speed (see ETH_SPEED_NUM_*) */
+	uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */
 	uint32_t capa;  /**< FEC capabilities bitmask */
 };
 
@@ -1749,13 +1978,17 @@ struct rte_eth_fec_capa {
  */
 
 /**< l2 tunnel enable mask */
-#define ETH_L2_TUNNEL_ENABLE_MASK       0x00000001
+#define RTE_ETH_L2_TUNNEL_ENABLE_MASK       0x00000001
+#define ETH_L2_TUNNEL_ENABLE_MASK	RTE_ETH_L2_TUNNEL_ENABLE_MASK
 /**< l2 tunnel insertion mask */
-#define ETH_L2_TUNNEL_INSERTION_MASK    0x00000002
+#define RTE_ETH_L2_TUNNEL_INSERTION_MASK    0x00000002
+#define ETH_L2_TUNNEL_INSERTION_MASK	RTE_ETH_L2_TUNNEL_INSERTION_MASK
 /**< l2 tunnel stripping mask */
-#define ETH_L2_TUNNEL_STRIPPING_MASK    0x00000004
+#define RTE_ETH_L2_TUNNEL_STRIPPING_MASK    0x00000004
+#define ETH_L2_TUNNEL_STRIPPING_MASK	RTE_ETH_L2_TUNNEL_STRIPPING_MASK
 /**< l2 tunnel forwarding mask */
-#define ETH_L2_TUNNEL_FORWARDING_MASK   0x00000008
+#define RTE_ETH_L2_TUNNEL_FORWARDING_MASK   0x00000008
+#define ETH_L2_TUNNEL_FORWARDING_MASK	RTE_ETH_L2_TUNNEL_FORWARDING_MASK
 
 /**
  * Function type used for RX packet processing packet callbacks.
@@ -2068,14 +2301,14 @@ uint16_t rte_eth_dev_count_total(void);
  * @param speed
  *   Numerical speed value in Mbps
  * @param duplex
- *   ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
+ *   RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds)
  * @return
  *   0 if the speed cannot be mapped
  */
 uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 
 /**
- * Get DEV_RX_OFFLOAD_* flag name.
+ * Get RTE_ETH_RX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2085,7 +2318,7 @@ uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
 const char *rte_eth_dev_rx_offload_name(uint64_t offload);
 
 /**
- * Get DEV_TX_OFFLOAD_* flag name.
+ * Get RTE_ETH_TX_OFFLOAD_* flag name.
  *
  * @param offload
  *   Offload flag.
@@ -2179,7 +2412,7 @@ rte_eth_dev_is_removed(uint16_t port_id);
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
  *   In addition it contains the hardware offloads features to activate using
- *   the DEV_RX_OFFLOAD_* flags.
+ *   the RTE_ETH_RX_OFFLOAD_* flags.
  *   If an offloading set in rx_conf->offloads
  *   hasn't been set in the input argument eth_conf->rxmode.offloads
  *   to rte_eth_dev_configure(), it is a new added offloading, it must be
@@ -2756,7 +2989,7 @@ const char *rte_eth_link_speed_to_str(uint32_t link_speed);
  *
  * @param str
  *   A pointer to a string to be filled with textual representation of
- *   device status. At least ETH_LINK_MAX_STR_LEN bytes should be allocated to
+ *   device status. At least RTE_ETH_LINK_MAX_STR_LEN bytes should be allocated to
  *   store default link status text.
  * @param len
  *   Length of available memory at 'str' string.
@@ -3261,10 +3494,10 @@ int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
  *   The port identifier of the Ethernet device.
  * @param offload_mask
  *   The VLAN Offload bit mask can be mixed use with "OR"
- *       ETH_VLAN_STRIP_OFFLOAD
- *       ETH_VLAN_FILTER_OFFLOAD
- *       ETH_VLAN_EXTEND_OFFLOAD
- *       ETH_QINQ_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_FILTER_OFFLOAD
+ *       RTE_ETH_VLAN_EXTEND_OFFLOAD
+ *       RTE_ETH_QINQ_STRIP_OFFLOAD
  * @return
  *   - (0) if successful.
  *   - (-ENOTSUP) if hardware-assisted VLAN filtering not configured.
@@ -3280,10 +3513,10 @@ int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask);
  *   The port identifier of the Ethernet device.
  * @return
  *   - (>0) if successful. Bit mask to indicate
- *       ETH_VLAN_STRIP_OFFLOAD
- *       ETH_VLAN_FILTER_OFFLOAD
- *       ETH_VLAN_EXTEND_OFFLOAD
- *       ETH_QINQ_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_STRIP_OFFLOAD
+ *       RTE_ETH_VLAN_FILTER_OFFLOAD
+ *       RTE_ETH_VLAN_EXTEND_OFFLOAD
+ *       RTE_ETH_QINQ_STRIP_OFFLOAD
  *   - (-ENODEV) if *port_id* invalid.
  */
 int rte_eth_dev_get_vlan_offload(uint16_t port_id);
@@ -5231,7 +5464,7 @@ static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
  * rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf*  buffers
  * of those packets whose transmission was effectively completed.
  *
- * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
  * invoke this function concurrently on the same tx queue without SW lock.
  * @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
  *
diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h
index edf96de2dc2e..8e6156a62aa9 100644
--- a/lib/ethdev/rte_ethdev_core.h
+++ b/lib/ethdev/rte_ethdev_core.h
@@ -154,7 +154,7 @@ struct rte_eth_dev_data {
 			/**< Device Ethernet link address.
 			 *   @see rte_eth_dev_release_port()
 			 */
-	uint64_t mac_pool_sel[ETH_NUM_RECEIVE_MAC_ADDR];
+	uint64_t mac_pool_sel[RTE_ETH_NUM_RECEIVE_MAC_ADDR];
 			/**< Bitmap associating MAC addresses to pools. */
 	struct rte_ether_addr *hash_mac_addrs;
 			/**< Device Ethernet MAC addresses of hash filtering.
diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h
index 70f455d47d60..4152067368b8 100644
--- a/lib/ethdev/rte_flow.h
+++ b/lib/ethdev/rte_flow.h
@@ -2593,7 +2593,7 @@ struct rte_flow_action_rss {
 	 * through.
 	 */
 	uint32_t level;
-	uint64_t types; /**< Specific RSS hash types (see ETH_RSS_*). */
+	uint64_t types; /**< Specific RSS hash types (see RTE_ETH_RSS_*). */
 	uint32_t key_len; /**< Hash key length in bytes. */
 	uint32_t queue_num; /**< Number of entries in @p queue. */
 	const uint8_t *key; /**< Hash key. */
diff --git a/lib/gso/rte_gso.c b/lib/gso/rte_gso.c
index 0d02ec3cee05..119fdcac0b7f 100644
--- a/lib/gso/rte_gso.c
+++ b/lib/gso/rte_gso.c
@@ -15,13 +15,13 @@
 #include "gso_udp4.h"
 
 #define ILLEGAL_UDP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & DEV_TX_OFFLOAD_UDP_TSO) == 0) || \
+	((((ctx)->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO) == 0) || \
 	 (ctx)->gso_size < RTE_GSO_UDP_SEG_SIZE_MIN)
 
 #define ILLEGAL_TCP_GSO_CTX(ctx) \
-	((((ctx)->gso_types & (DEV_TX_OFFLOAD_TCP_TSO | \
-		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
-		DEV_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
+	((((ctx)->gso_types & (RTE_ETH_TX_OFFLOAD_TCP_TSO | \
+		RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)) == 0) || \
 		(ctx)->gso_size < RTE_GSO_SEG_SIZE_MIN)
 
 int
@@ -54,28 +54,28 @@ rte_gso_segment(struct rte_mbuf *pkt,
 	ol_flags = pkt->ol_flags;
 
 	if ((IS_IPV4_VXLAN_TCP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO)) ||
 			((IS_IPV4_GRE_TCP4(pkt->ol_flags) &&
-			 (gso_ctx->gso_types & DEV_TX_OFFLOAD_GRE_TNL_TSO)))) {
+			 (gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO)))) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tunnel_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_VXLAN_UDP4(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_VXLAN_TNL_TSO) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) &&
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_tunnel_udp4_segment(pkt, gso_size,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_TCP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_TCP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_TCP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_TCP_SEG);
 		ret = gso_tcp4_segment(pkt, gso_size, ipid_delta,
 				direct_pool, indirect_pool,
 				pkts_out, nb_pkts_out);
 	} else if (IS_IPV4_UDP(pkt->ol_flags) &&
-			(gso_ctx->gso_types & DEV_TX_OFFLOAD_UDP_TSO)) {
+			(gso_ctx->gso_types & RTE_ETH_TX_OFFLOAD_UDP_TSO)) {
 		pkt->ol_flags &= (~PKT_TX_UDP_SEG);
 		ret = gso_udp4_segment(pkt, gso_size, direct_pool,
 				indirect_pool, pkts_out, nb_pkts_out);
diff --git a/lib/gso/rte_gso.h b/lib/gso/rte_gso.h
index d93ee8e5b171..0a65afc11e64 100644
--- a/lib/gso/rte_gso.h
+++ b/lib/gso/rte_gso.h
@@ -52,11 +52,11 @@ struct rte_gso_ctx {
 	uint32_t gso_types;
 	/**< the bit mask of required GSO types. The GSO library
 	 * uses the same macros as that of describing device TX
-	 * offloading capabilities (i.e. DEV_TX_OFFLOAD_*_TSO) for
+	 * offloading capabilities (i.e. RTE_ETH_TX_OFFLOAD_*_TSO) for
 	 * gso_types.
 	 *
 	 * For example, if applications want to segment TCP/IPv4
-	 * packets, set DEV_TX_OFFLOAD_TCP_TSO in gso_types.
+	 * packets, set RTE_ETH_TX_OFFLOAD_TCP_TSO in gso_types.
 	 */
 	uint16_t gso_size;
 	/**< maximum size of an output GSO segment, including packet
diff --git a/lib/mbuf/rte_mbuf_core.h b/lib/mbuf/rte_mbuf_core.h
index bb38d7f58102..50e611e887bf 100644
--- a/lib/mbuf/rte_mbuf_core.h
+++ b/lib/mbuf/rte_mbuf_core.h
@@ -192,7 +192,7 @@ extern "C" {
  * The detection of PKT_RX_OUTER_L4_CKSUM_GOOD shall be based on the given
  * HW capability, At minimum, the PMD should support
  * PKT_RX_OUTER_L4_CKSUM_UNKNOWN and PKT_RX_OUTER_L4_CKSUM_BAD states
- * if the DEV_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
+ * if the RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM offload is available.
  */
 #define PKT_RX_OUTER_L4_CKSUM_MASK	((1ULL << 21) | (1ULL << 22))
 
@@ -215,7 +215,7 @@ extern "C" {
  * a) Fill outer_l2_len and outer_l3_len in mbuf.
  * b) Set the PKT_TX_OUTER_UDP_CKSUM flag.
  * c) Set the PKT_TX_OUTER_IPV4 or PKT_TX_OUTER_IPV6 flag.
- * 2) Configure DEV_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
+ * 2) Configure RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM offload flag.
  */
 #define PKT_TX_OUTER_UDP_CKSUM     (1ULL << 41)
 
@@ -258,7 +258,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_GRE
  * or PKT_TX_TUNNEL_IPIP if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_IP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_IP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
@@ -271,7 +271,7 @@ extern "C" {
  * It can be used for tunnels which are not standards or listed above.
  * It is preferred to use specific tunnel flags like PKT_TX_TUNNEL_VXLAN
  * if possible.
- * The ethdev must be configured with DEV_TX_OFFLOAD_UDP_TNL_TSO.
+ * The ethdev must be configured with RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO.
  * Outer and inner checksums are done according to the existing flags like
  * PKT_TX_xxx_CKSUM.
  * Specific tunnel headers that contain payload length, sequence id
diff --git a/lib/mbuf/rte_mbuf_dyn.h b/lib/mbuf/rte_mbuf_dyn.h
index 13f06d8ed25b..be43f8c328e1 100644
--- a/lib/mbuf/rte_mbuf_dyn.h
+++ b/lib/mbuf/rte_mbuf_dyn.h
@@ -37,7 +37,7 @@
  *   of the dynamic field to be registered:
  *   const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
  * - The application initializes the PMD, and asks for this feature
- *   at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ *   at port initialization by passing RTE_ETH_RX_OFFLOAD_MY_FEATURE in
  *   rxconf. This will make the PMD to register the field by calling
  *   rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
  *   stores the returned offset.
-- 
2.31.1


^ permalink raw reply	[relevance 1%]

* Re: [dpdk-dev] [PATCH v2] eventdev: update crypto adapter metadata structures
  @ 2021-08-31  6:08  3%   ` Akhil Goyal
  2021-08-31  6:51  0%     ` Shijith Thotton
  2021-08-31  7:56  6%   ` [dpdk-dev] [PATCH v3] " Shijith Thotton
  1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2021-08-31  6:08 UTC (permalink / raw)
  To: Shijith Thotton, dev
  Cc: Shijith Thotton, Jerin Jacob Kollanukkaran, Anoob Joseph,
	Pavan Nikhilesh Bhagavatula, Abhinandan Gujjar, Ray Kinsella,
	Ankur Dwivedi

> In the crypto adapter metadata, the reserved bytes in the request info
> structure are a placeholder for the response info. This forces an order
> of operations when the structures are updated using memcpy, to avoid
> overwriting the response info. It is logical to move the reserved space
> out of the request info, which also solves the ordering issue mentioned
> above.
> 
> This patch removes the reserved field from the request info and changes
> the event crypto metadata type from a union to a structure, to make
> room for the response info.
> 
> The application and drivers are updated to match the metadata change.
> 
> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
> ---
> v2:
> * Updated deprecation notice.
> 
Please also update the API/ABI section of the release notes for the changes introduced in this patch.

^ permalink raw reply	[relevance 3%]

* Re: [dpdk-dev] [PATCH v2] eventdev: update crypto adapter metadata structures
  2021-08-31  6:08  3%   ` Akhil Goyal
@ 2021-08-31  6:51  0%     ` Shijith Thotton
  0 siblings, 0 replies; 200+ results
From: Shijith Thotton @ 2021-08-31  6:51 UTC (permalink / raw)
  To: Akhil Goyal, dev
  Cc: Jerin Jacob Kollanukkaran, Anoob Joseph,
	Pavan Nikhilesh Bhagavatula, Abhinandan Gujjar, Ray Kinsella,
	Ankur Dwivedi

>
>> In the crypto adapter metadata, the reserved bytes in the request info
>> structure are a placeholder for the response info. This forces an order
>> of operations when the structures are updated using memcpy, to avoid
>> overwriting the response info. It is logical to move the reserved space
>> out of the request info, which also solves the ordering issue mentioned
>> above.
>>
>> This patch removes the reserved field from the request info and changes
>> the event crypto metadata type from a union to a structure, to make
>> room for the response info.
>>
>> The application and drivers are updated to match the metadata change.
>>
>> Signed-off-by: Shijith Thotton <sthotton@marvell.com>
>> ---
>> v2:
>> * Updated deprecation notice.
>>
>Please also update the API/ABI section of the release notes for the
>changes introduced in this patch.
I will send v3 with the changes.

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v3] eventdev: update crypto adapter metadata structures
    2021-08-31  6:08  3%   ` Akhil Goyal
@ 2021-08-31  7:56  6%   ` Shijith Thotton
  1 sibling, 0 replies; 200+ results
From: Shijith Thotton @ 2021-08-31  7:56 UTC (permalink / raw)
  To: dev
  Cc: Shijith Thotton, jerinj, anoobj, pbhagavatula, gakhil,
	Abhinandan Gujjar, Ray Kinsella, Ankur Dwivedi

In the crypto adapter metadata, the reserved bytes in the request info
structure are a placeholder for the response info. This forces an order
of operations when the structures are updated using memcpy, to avoid
overwriting the response info. It is logical to move the reserved space
out of the request info, which also solves the ordering issue mentioned
above.

This patch removes the reserved field from the request info and changes
the event crypto metadata type from a union to a structure, to make
room for the response info.

The application and drivers are updated to match the metadata change.
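
For illustration, a minimal sketch of how an application could fill the
new structure for OP_FORWARD mode (hypothetical snippet; ev_qid, cdev_id,
qp_id and sess are placeholders, based on the test code updated below):

	struct rte_event_crypto_metadata m_data;

	memset(&m_data, 0, sizeof(m_data));
	/* Event returned by the adapter once the crypto op completes. */
	m_data.response_info.queue_id = ev_qid;
	m_data.response_info.sched_type = RTE_SCHED_TYPE_ATOMIC;
	/* Request info consumed by the adapter to pick the crypto queue pair. */
	m_data.request_info.cdev_id = cdev_id;
	m_data.request_info.queue_pair_id = qp_id;
	rte_cryptodev_sym_session_set_user_data(sess, &m_data, sizeof(m_data));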

Signed-off-by: Shijith Thotton <sthotton@marvell.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
v3:
* Updated ABI section of release notes.

v2:
* Updated deprecation notice.

v1:
* Rebased.

 app/test/test_event_crypto_adapter.c              | 14 +++++++-------
 doc/guides/rel_notes/deprecation.rst              |  6 ------
 doc/guides/rel_notes/release_21_11.rst            |  2 ++
 drivers/crypto/octeontx/otx_cryptodev_ops.c       |  8 ++++----
 drivers/crypto/octeontx2/otx2_cryptodev_ops.c     |  4 ++--
 .../event/octeontx2/otx2_evdev_crypto_adptr_tx.h  |  4 ++--
 lib/eventdev/rte_event_crypto_adapter.c           |  8 ++++----
 lib/eventdev/rte_event_crypto_adapter.h           | 15 +++++----------
 8 files changed, 26 insertions(+), 35 deletions(-)

diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c
index 3ad20921e2..0d73694d3a 100644
--- a/app/test/test_event_crypto_adapter.c
+++ b/app/test/test_event_crypto_adapter.c
@@ -168,7 +168,7 @@ test_op_forward_mode(uint8_t session_less)
 {
 	struct rte_crypto_sym_xform cipher_xform;
 	struct rte_cryptodev_sym_session *sess;
-	union rte_event_crypto_metadata m_data;
+	struct rte_event_crypto_metadata m_data;
 	struct rte_crypto_sym_op *sym_op;
 	struct rte_crypto_op *op;
 	struct rte_mbuf *m;
@@ -368,7 +368,7 @@ test_op_new_mode(uint8_t session_less)
 {
 	struct rte_crypto_sym_xform cipher_xform;
 	struct rte_cryptodev_sym_session *sess;
-	union rte_event_crypto_metadata m_data;
+	struct rte_event_crypto_metadata m_data;
 	struct rte_crypto_sym_op *sym_op;
 	struct rte_crypto_op *op;
 	struct rte_mbuf *m;
@@ -406,7 +406,7 @@ test_op_new_mode(uint8_t session_less)
 		if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA) {
 			/* Fill in private user data information */
 			rte_memcpy(&m_data.response_info, &response_info,
-				   sizeof(m_data));
+				   sizeof(response_info));
 			rte_cryptodev_sym_session_set_user_data(sess,
 						&m_data, sizeof(m_data));
 		}
@@ -426,7 +426,7 @@ test_op_new_mode(uint8_t session_less)
 		op->private_data_offset = len;
 		/* Fill in private data information */
 		rte_memcpy(&m_data.response_info, &response_info,
-			   sizeof(m_data));
+			   sizeof(response_info));
 		rte_memcpy((uint8_t *)op + len, &m_data, sizeof(m_data));
 	}
 
@@ -519,7 +519,7 @@ configure_cryptodev(void)
 			DEFAULT_NUM_XFORMS *
 			sizeof(struct rte_crypto_sym_xform) +
 			MAXIMUM_IV_LENGTH +
-			sizeof(union rte_event_crypto_metadata),
+			sizeof(struct rte_event_crypto_metadata),
 			rte_socket_id());
 	if (params.op_mpool == NULL) {
 		RTE_LOG(ERR, USER1, "Can't create CRYPTO_OP_POOL\n");
@@ -549,12 +549,12 @@ configure_cryptodev(void)
 	 * to include the session headers & private data
 	 */
 	session_size = rte_cryptodev_sym_get_private_session_size(TEST_CDEV_ID);
-	session_size += sizeof(union rte_event_crypto_metadata);
+	session_size += sizeof(struct rte_event_crypto_metadata);
 
 	params.session_mpool = rte_cryptodev_sym_session_pool_create(
 			"CRYPTO_ADAPTER_SESSION_MP",
 			MAX_NB_SESSIONS, 0, 0,
-			sizeof(union rte_event_crypto_metadata),
+			sizeof(struct rte_event_crypto_metadata),
 			SOCKET_ID_ANY);
 	TEST_ASSERT_NOT_NULL(params.session_mpool,
 			"session mempool allocation failed\n");
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 76a4abfd6b..58ee95c020 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -266,12 +266,6 @@ Deprecation Notices
   values to the function ``rte_event_eth_rx_adapter_queue_add`` using
   the structure ``rte_event_eth_rx_adapter_queue_add``.
 
-* eventdev: Reserved bytes of ``rte_event_crypto_request`` is a space holder
-  for ``response_info``. Both should be decoupled for better clarity.
-  New space for ``response_info`` can be made by changing
-  ``rte_event_crypto_metadata`` type to structure from union.
-  This change is targeted for DPDK 21.11.
-
 * metrics: The function ``rte_metrics_init`` will have a non-void return
   in order to notify errors instead of calling ``rte_exit``.
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index d707a554ef..ab76d5dd55 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -100,6 +100,8 @@ ABI Changes
    Also, make sure to start the actual text at the margin.
    =======================================================
 
+* eventdev: Modified type of ``union rte_event_crypto_metadata`` to struct and
+  removed reserved bytes from ``struct rte_event_crypto_request``.
 
 Known Issues
 ------------
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index eac6796cfb..c51be63146 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -710,17 +710,17 @@ submit_request_to_sso(struct ssows *ws, uintptr_t req,
 	ssovf_store_pair(add_work, req, ws->grps[rsp_info->queue_id]);
 }
 
-static inline union rte_event_crypto_metadata *
+static inline struct rte_event_crypto_metadata *
 get_event_crypto_mdata(struct rte_crypto_op *op)
 {
-	union rte_event_crypto_metadata *ec_mdata;
+	struct rte_event_crypto_metadata *ec_mdata;
 
 	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
 		ec_mdata = rte_cryptodev_sym_session_get_user_data(
 							   op->sym->session);
 	else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
 		 op->private_data_offset)
-		ec_mdata = (union rte_event_crypto_metadata *)
+		ec_mdata = (struct rte_event_crypto_metadata *)
 			((uint8_t *)op + op->private_data_offset);
 	else
 		return NULL;
@@ -731,7 +731,7 @@ get_event_crypto_mdata(struct rte_crypto_op *op)
 uint16_t __rte_hot
 otx_crypto_adapter_enqueue(void *port, struct rte_crypto_op *op)
 {
-	union rte_event_crypto_metadata *ec_mdata;
+	struct rte_event_crypto_metadata *ec_mdata;
 	struct cpt_instance *instance;
 	struct cpt_request_info *req;
 	struct rte_event *rsp_info;
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
index 42100154cd..952d1352f4 100644
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
@@ -453,7 +453,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
 		    struct rte_crypto_op *op,
 		    uint64_t cpt_inst_w7)
 {
-	union rte_event_crypto_metadata *m_data;
+	struct rte_event_crypto_metadata *m_data;
 	union cpt_inst_s inst;
 	uint64_t lmt_status;
 
@@ -468,7 +468,7 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
 		}
 	} else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
 		   op->private_data_offset) {
-		m_data = (union rte_event_crypto_metadata *)
+		m_data = (struct rte_event_crypto_metadata *)
 			 ((uint8_t *)op +
 			  op->private_data_offset);
 	} else {
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
index ecf7eb9f56..458e8306d7 100644
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
+++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
@@ -16,7 +16,7 @@
 static inline uint16_t
 otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
 {
-	union rte_event_crypto_metadata *m_data;
+	struct rte_event_crypto_metadata *m_data;
 	struct rte_crypto_op *crypto_op;
 	struct rte_cryptodev *cdev;
 	struct otx2_cpt_qp *qp;
@@ -37,7 +37,7 @@ otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
 		qp_id = m_data->request_info.queue_pair_id;
 	} else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
 		   crypto_op->private_data_offset) {
-		m_data = (union rte_event_crypto_metadata *)
+		m_data = (struct rte_event_crypto_metadata *)
 			 ((uint8_t *)crypto_op +
 			  crypto_op->private_data_offset);
 		cdev_id = m_data->request_info.cdev_id;
diff --git a/lib/eventdev/rte_event_crypto_adapter.c b/lib/eventdev/rte_event_crypto_adapter.c
index e1d38d383d..6977391ae9 100644
--- a/lib/eventdev/rte_event_crypto_adapter.c
+++ b/lib/eventdev/rte_event_crypto_adapter.c
@@ -333,7 +333,7 @@ eca_enq_to_cryptodev(struct rte_event_crypto_adapter *adapter,
 		 struct rte_event *ev, unsigned int cnt)
 {
 	struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
-	union rte_event_crypto_metadata *m_data = NULL;
+	struct rte_event_crypto_metadata *m_data = NULL;
 	struct crypto_queue_pair_info *qp_info = NULL;
 	struct rte_crypto_op *crypto_op;
 	unsigned int i, n;
@@ -371,7 +371,7 @@ eca_enq_to_cryptodev(struct rte_event_crypto_adapter *adapter,
 			len++;
 		} else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
 				crypto_op->private_data_offset) {
-			m_data = (union rte_event_crypto_metadata *)
+			m_data = (struct rte_event_crypto_metadata *)
 				 ((uint8_t *)crypto_op +
 					crypto_op->private_data_offset);
 			cdev_id = m_data->request_info.cdev_id;
@@ -504,7 +504,7 @@ eca_ops_enqueue_burst(struct rte_event_crypto_adapter *adapter,
 		  struct rte_crypto_op **ops, uint16_t num)
 {
 	struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
-	union rte_event_crypto_metadata *m_data = NULL;
+	struct rte_event_crypto_metadata *m_data = NULL;
 	uint8_t event_dev_id = adapter->eventdev_id;
 	uint8_t event_port_id = adapter->event_port_id;
 	struct rte_event events[BATCH_SIZE];
@@ -523,7 +523,7 @@ eca_ops_enqueue_burst(struct rte_event_crypto_adapter *adapter,
 					ops[i]->sym->session);
 		} else if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
 				ops[i]->private_data_offset) {
-			m_data = (union rte_event_crypto_metadata *)
+			m_data = (struct rte_event_crypto_metadata *)
 				 ((uint8_t *)ops[i] +
 				  ops[i]->private_data_offset);
 		}
diff --git a/lib/eventdev/rte_event_crypto_adapter.h b/lib/eventdev/rte_event_crypto_adapter.h
index f8c6cca87c..3c24d9d9df 100644
--- a/lib/eventdev/rte_event_crypto_adapter.h
+++ b/lib/eventdev/rte_event_crypto_adapter.h
@@ -200,11 +200,6 @@ enum rte_event_crypto_adapter_mode {
  * provide event request information to the adapter.
  */
 struct rte_event_crypto_request {
-	uint8_t resv[8];
-	/**< Overlaps with first 8 bytes of struct rte_event
-	 * that encode the response event information. Application
-	 * is expected to fill in struct rte_event response_info.
-	 */
 	uint16_t cdev_id;
 	/**< cryptodev ID to be used */
 	uint16_t queue_pair_id;
@@ -223,16 +218,16 @@ struct rte_event_crypto_request {
  * operation. If the transfer is done by SW, event response information
  * will be used by the adapter.
  */
-union rte_event_crypto_metadata {
-	struct rte_event_crypto_request request_info;
-	/**< Request information to be filled in by application
-	 * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
-	 */
+struct rte_event_crypto_metadata {
 	struct rte_event response_info;
 	/**< Response information to be filled in by application
 	 * for RTE_EVENT_CRYPTO_ADAPTER_OP_NEW and
 	 * RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
 	 */
+	struct rte_event_crypto_request request_info;
+	/**< Request information to be filled in by application
+	 * for RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode.
+	 */
 };
 
 /**
-- 
2.25.1


^ permalink raw reply	[relevance 6%]

* Re: [dpdk-dev] [RFC] eventdev: uninline inline API functions
  2021-08-30 16:00  2% ` [dpdk-dev] [RFC] eventdev: uninline inline API functions Mattias Rönnblom
@ 2021-08-31 12:28  0%   ` Jerin Jacob
  2021-08-31 12:34  0%     ` Mattias Rönnblom
  0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-08-31 12:28 UTC (permalink / raw)
  To: Mattias Rönnblom
  Cc: Jerin Jacob, Pavan Nikhilesh, dpdk-dev, bogdan.tanasa

On Mon, Aug 30, 2021 at 9:30 PM Mattias Rönnblom
<mattias.ronnblom@ericsson.com> wrote:
>
> Replace the inline functions in the eventdev user application API with
> regular non-inline API calls. This allows for a cleaner and simpler
> API/ABI, but might well also cause performance regressions.
>
> The purpose of this RFC patch is to allow for performance testing.
>
> The rte_eventdev struct declaration should be moved off the public
> API.
>
> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>

I think we need to align all DPDK subsystems to a similar scheme.[1]
I see a regression of around 5%, depending on the workload.

[1]
https://patches.dpdk.org/project/dpdk/patch/20210820162834.12544-2-konstantin.ananyev@intel.com/
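
As a rough illustration of where that cost shows up, consider a typical
application enqueue loop (hypothetical snippet; dev_id, port_id, events
and nb_events are placeholders):

	uint16_t sent = 0;

	/* With the inline API the dev->enqueue_burst() dispatch is resolved
	 * at the call site; after this patch each iteration pays a regular
	 * (possibly cross-shared-object) call into rte_event_enqueue_burst(),
	 * which is where a few percent of overhead can come from on
	 * enqueue/dequeue-bound workloads.
	 */
	while (sent < nb_events)
		sent += rte_event_enqueue_burst(dev_id, port_id,
						&events[sent],
						nb_events - sent);
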

> ---
>  drivers/net/octeontx/octeontx_ethdev.h  |  1 +
>  lib/eventdev/rte_event_eth_rx_adapter.h |  1 +
>  lib/eventdev/rte_event_eth_tx_adapter.c | 31 ++++++++
>  lib/eventdev/rte_event_eth_tx_adapter.h | 35 ++-------
>  lib/eventdev/rte_eventdev.c             | 82 +++++++++++++++++++++
>  lib/eventdev/rte_eventdev.h             | 94 +++----------------------
>  lib/eventdev/version.map                |  4 ++
>  7 files changed, 134 insertions(+), 114 deletions(-)
>
> diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
> index b73515de37..9402105fcf 100644
> --- a/drivers/net/octeontx/octeontx_ethdev.h
> +++ b/drivers/net/octeontx/octeontx_ethdev.h
> @@ -9,6 +9,7 @@
>
>  #include <rte_common.h>
>  #include <ethdev_driver.h>
> +#include <eventdev_pmd.h>
>  #include <rte_eventdev.h>
>  #include <rte_mempool.h>
>  #include <rte_memory.h>
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
> index 182dd2e5dd..79f4822fb0 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> @@ -84,6 +84,7 @@ extern "C" {
>  #include <rte_service.h>
>
>  #include "rte_eventdev.h"
> +#include "eventdev_pmd.h"
>
>  #define RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE 32
>
> diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
> index 18c0359db7..74f88e6147 100644
> --- a/lib/eventdev/rte_event_eth_tx_adapter.c
> +++ b/lib/eventdev/rte_event_eth_tx_adapter.c
> @@ -1154,6 +1154,37 @@ rte_event_eth_tx_adapter_start(uint8_t id)
>         return ret;
>  }
>
> +uint16_t
> +rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
> +                                uint8_t port_id,
> +                                struct rte_event ev[],
> +                                uint16_t nb_events,
> +                                const uint8_t flags)
> +{
> +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> +
> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> +       if (dev_id >= RTE_EVENT_MAX_DEVS ||
> +               !rte_eventdevs[dev_id].attached) {
> +               rte_errno = EINVAL;
> +               return 0;
> +       }
> +
> +       if (port_id >= dev->data->nb_ports) {
> +               rte_errno = EINVAL;
> +               return 0;
> +       }
> +#endif
> +       rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
> +               nb_events, flags);
> +       if (flags)
> +               return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
> +                                                 ev, nb_events);
> +       else
> +               return dev->txa_enqueue(dev->data->ports[port_id], ev,
> +                                       nb_events);
> +}
> +
>  int
>  rte_event_eth_tx_adapter_stats_get(uint8_t id,
>                                 struct rte_event_eth_tx_adapter_stats *stats)
> diff --git a/lib/eventdev/rte_event_eth_tx_adapter.h b/lib/eventdev/rte_event_eth_tx_adapter.h
> index 8c59547165..3cd65e8a09 100644
> --- a/lib/eventdev/rte_event_eth_tx_adapter.h
> +++ b/lib/eventdev/rte_event_eth_tx_adapter.h
> @@ -79,6 +79,7 @@ extern "C" {
>  #include <rte_mbuf.h>
>
>  #include "rte_eventdev.h"
> +#include "eventdev_pmd.h"
>
>  /**
>   * Adapter configuration structure
> @@ -348,36 +349,12 @@ rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
>   *              one or more events. This error code is only applicable to
>   *              closed systems.
>   */
> -static inline uint16_t
> +uint16_t
>  rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
> -                               uint8_t port_id,
> -                               struct rte_event ev[],
> -                               uint16_t nb_events,
> -                               const uint8_t flags)
> -{
> -       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> -       if (dev_id >= RTE_EVENT_MAX_DEVS ||
> -               !rte_eventdevs[dev_id].attached) {
> -               rte_errno = EINVAL;
> -               return 0;
> -       }
> -
> -       if (port_id >= dev->data->nb_ports) {
> -               rte_errno = EINVAL;
> -               return 0;
> -       }
> -#endif
> -       rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
> -               nb_events, flags);
> -       if (flags)
> -               return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
> -                                                 ev, nb_events);
> -       else
> -               return dev->txa_enqueue(dev->data->ports[port_id], ev,
> -                                       nb_events);
> -}
> +                                uint8_t port_id,
> +                                struct rte_event ev[],
> +                                uint16_t nb_events,
> +                                const uint8_t flags);
>
>  /**
>   * Retrieve statistics for an adapter
> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
> index 594dd5e759..e2dad8a838 100644
> --- a/lib/eventdev/rte_eventdev.c
> +++ b/lib/eventdev/rte_eventdev.c
> @@ -1119,6 +1119,65 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
>         return count;
>  }
>
> +static __rte_always_inline uint16_t
> +__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> +                       const struct rte_event ev[], uint16_t nb_events,
> +                       const event_enqueue_burst_t fn)
> +{
> +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> +
> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> +       if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
> +               rte_errno = EINVAL;
> +               return 0;
> +       }
> +
> +       if (port_id >= dev->data->nb_ports) {
> +               rte_errno = EINVAL;
> +               return 0;
> +       }
> +#endif
> +       rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
> +       /*
> +        * Allow zero cost non burst mode routine invocation if application
> +        * requests nb_events as const one
> +        */
> +       if (nb_events == 1)
> +               return (*dev->enqueue)(dev->data->ports[port_id], ev);
> +       else
> +               return fn(dev->data->ports[port_id], ev, nb_events);
> +}
> +
> +uint16_t
> +rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> +                       const struct rte_event ev[], uint16_t nb_events)
> +{
> +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> +
> +       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> +                                        dev->enqueue_burst);
> +}
> +
> +uint16_t
> +rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
> +                           const struct rte_event ev[], uint16_t nb_events)
> +{
> +       const struct rte_eventdev *dev = &rte_event_devices[dev_id];
> +
> +       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> +                                        dev->enqueue_new_burst);
> +}
> +
> +uint16_t
> +rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
> +                               const struct rte_event ev[], uint16_t nb_events)
> +{
> +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> +
> +       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> +                       dev->enqueue_forward_burst);
> +}
> +
>  int
>  rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
>                                  uint64_t *timeout_ticks)
> @@ -1135,6 +1194,29 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
>         return (*dev->dev_ops->timeout_ticks)(dev, ns, timeout_ticks);
>  }
>
> +uint16_t
> +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
> +                       uint16_t nb_events, uint64_t timeout_ticks)
> +{
> +       struct rte_eventdev *dev = &rte_event_devices[dev_id];
> +
> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> +       if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
> +               rte_errno = EINVAL;
> +               return 0;
> +       }
> +
> +       if (port_id >= dev->data->nb_ports) {
> +               rte_errno = EINVAL;
> +               return 0;
> +       }
> +#endif
> +       rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
> +
> +       return (*dev->dequeue_burst)(dev->data->ports[port_id], ev, nb_events,
> +                                    timeout_ticks);
> +}
> +
>  int
>  rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
>  {
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index a9c496fb62..451e9fb0a0 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -1445,38 +1445,6 @@ struct rte_eventdev {
>         void *reserved_ptrs[3];   /**< Reserved for future fields */
>  } __rte_cache_aligned;
>
> -extern struct rte_eventdev *rte_eventdevs;
> -/** @internal The pool of rte_eventdev structures. */
> -
> -static __rte_always_inline uint16_t
> -__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> -                       const struct rte_event ev[], uint16_t nb_events,
> -                       const event_enqueue_burst_t fn)
> -{
> -       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> -       if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
> -               rte_errno = EINVAL;
> -               return 0;
> -       }
> -
> -       if (port_id >= dev->data->nb_ports) {
> -               rte_errno = EINVAL;
> -               return 0;
> -       }
> -#endif
> -       rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
> -       /*
> -        * Allow zero cost non burst mode routine invocation if application
> -        * requests nb_events as const one
> -        */
> -       if (nb_events == 1)
> -               return (*dev->enqueue)(dev->data->ports[port_id], ev);
> -       else
> -               return fn(dev->data->ports[port_id], ev, nb_events);
> -}
> -
>  /**
>   * Enqueue a burst of events objects or an event object supplied in *rte_event*
>   * structure on an  event device designated by its *dev_id* through the event
> @@ -1520,15 +1488,9 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>   *              closed systems.
>   * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
>   */
> -static inline uint16_t
> +uint16_t
>  rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
> -                       const struct rte_event ev[], uint16_t nb_events)
> -{
> -       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> -       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> -                       dev->enqueue_burst);
> -}
> +                       const struct rte_event ev[], uint16_t nb_events);
>
>  /**
>   * Enqueue a burst of events objects of operation type *RTE_EVENT_OP_NEW* on
> @@ -1571,15 +1533,9 @@ rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>   * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
>   * @see rte_event_enqueue_burst()
>   */
> -static inline uint16_t
> +uint16_t
>  rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
> -                       const struct rte_event ev[], uint16_t nb_events)
> -{
> -       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> -       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> -                       dev->enqueue_new_burst);
> -}
> +                           const struct rte_event ev[], uint16_t nb_events);
>
>  /**
>   * Enqueue a burst of events objects of operation type *RTE_EVENT_OP_FORWARD*
> @@ -1622,15 +1578,10 @@ rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
>   * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
>   * @see rte_event_enqueue_burst()
>   */
> -static inline uint16_t
> +uint16_t
>  rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
> -                       const struct rte_event ev[], uint16_t nb_events)
> -{
> -       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> -       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
> -                       dev->enqueue_forward_burst);
> -}
> +                               const struct rte_event ev[],
> +                               uint16_t nb_events);
>
>  /**
>   * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
> @@ -1727,36 +1678,9 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
>   *
>   * @see rte_event_port_dequeue_depth()
>   */
> -static inline uint16_t
> +uint16_t
>  rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
> -                       uint16_t nb_events, uint64_t timeout_ticks)
> -{
> -       struct rte_eventdev *dev = &rte_eventdevs[dev_id];
> -
> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
> -       if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
> -               rte_errno = EINVAL;
> -               return 0;
> -       }
> -
> -       if (port_id >= dev->data->nb_ports) {
> -               rte_errno = EINVAL;
> -               return 0;
> -       }
> -#endif
> -       rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
> -       /*
> -        * Allow zero cost non burst mode routine invocation if application
> -        * requests nb_events as const one
> -        */
> -       if (nb_events == 1)
> -               return (*dev->dequeue)(
> -                       dev->data->ports[port_id], ev, timeout_ticks);
> -       else
> -               return (*dev->dequeue_burst)(
> -                       dev->data->ports[port_id], ev, nb_events,
> -                               timeout_ticks);
> -}
> +                       uint16_t nb_events, uint64_t timeout_ticks);
>
>  /**
>   * Link multiple source event queues supplied in *queues* to the destination
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index 88625621ec..8da79cbdc0 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -13,7 +13,11 @@ DPDK_22 {
>         rte_event_crypto_adapter_stats_get;
>         rte_event_crypto_adapter_stats_reset;
>         rte_event_crypto_adapter_stop;
> +       rte_event_enqueue_burst;
> +       rte_event_enqueue_new_burst;
> +       rte_event_enqueue_forward_burst;
>         rte_event_dequeue_timeout_ticks;
> +       rte_event_dequeue_burst;
>         rte_event_dev_attr_get;
>         rte_event_dev_close;
>         rte_event_dev_configure;
> --
> 2.17.1
>

^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [RFC] eventdev: uninline inline API functions
  2021-08-31 12:28  0%   ` Jerin Jacob
@ 2021-08-31 12:34  0%     ` Mattias Rönnblom
  0 siblings, 0 replies; 200+ results
From: Mattias Rönnblom @ 2021-08-31 12:34 UTC (permalink / raw)
  To: Jerin Jacob; +Cc: Jerin Jacob, Pavan Nikhilesh, dpdk-dev, Bogdan Tanasa

On 2021-08-31 14:28, Jerin Jacob wrote:
> On Mon, Aug 30, 2021 at 9:30 PM Mattias Rönnblom
> <mattias.ronnblom@ericsson.com> wrote:
>> Replace the inline functions in the eventdev user application API with
>> regular non-inline API calls. This allows for a cleaner and simpler
>> API/ABI, but might well also cause performance regressions.
>>
>> The purpose of this RFC patch is to allow for performance testing.
>>
>> The rte_eventdev struct declaration should be moved off the public
>> API.
>>
>> Signed-off-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
> I think we need to align all DPDK subsystems to a similar scheme.[1]


That makes perfect sense.


> I see around a -5% regression, depending on the workload.
>
> [1]
> https://patches.dpdk.org/project/dpdk/patch/20210820162834.12544-2-konstantin.ananyev@intel.com/
>
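
To make the tradeoff behind that regression concrete, here is a minimal sketch
in plain C of the two shapes of fast-path call being compared. Every name below
is invented for illustration; nothing in it is taken from the patch.

#include <stdint.h>

struct dummy_dev {
        uint16_t (*enqueue_burst)(void *port, const void *evs, uint16_t n);
        void *port;
};

/* "Before": the device table is visible to applications so that a static
 * inline wrapper in the public header can chase the function pointer
 * directly, with no call into the library on the fast path. */
extern struct dummy_dev dummy_devs[64];

static inline uint16_t
dummy_enqueue_burst_inline(uint8_t dev_id, const void *evs, uint16_t n)
{
        struct dummy_dev *dev = &dummy_devs[dev_id];

        return dev->enqueue_burst(dev->port, evs, n);
}

/* "After": only a plain function is exported; struct dummy_dev and the
 * device table can become private to the library, at the cost of one
 * extra, non-inlinable function call per burst. */
uint16_t dummy_enqueue_burst(uint8_t dev_id, const void *evs, uint16_t n);
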
>> ---
>>   drivers/net/octeontx/octeontx_ethdev.h  |  1 +
>>   lib/eventdev/rte_event_eth_rx_adapter.h |  1 +
>>   lib/eventdev/rte_event_eth_tx_adapter.c | 31 ++++++++
>>   lib/eventdev/rte_event_eth_tx_adapter.h | 35 ++-------
>>   lib/eventdev/rte_eventdev.c             | 82 +++++++++++++++++++++
>>   lib/eventdev/rte_eventdev.h             | 94 +++----------------------
>>   lib/eventdev/version.map                |  4 ++
>>   7 files changed, 134 insertions(+), 114 deletions(-)
>>
>> diff --git a/drivers/net/octeontx/octeontx_ethdev.h b/drivers/net/octeontx/octeontx_ethdev.h
>> index b73515de37..9402105fcf 100644
>> --- a/drivers/net/octeontx/octeontx_ethdev.h
>> +++ b/drivers/net/octeontx/octeontx_ethdev.h
>> @@ -9,6 +9,7 @@
>>
>>   #include <rte_common.h>
>>   #include <ethdev_driver.h>
>> +#include <eventdev_pmd.h>
>>   #include <rte_eventdev.h>
>>   #include <rte_mempool.h>
>>   #include <rte_memory.h>
>> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
>> index 182dd2e5dd..79f4822fb0 100644
>> --- a/lib/eventdev/rte_event_eth_rx_adapter.h
>> +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
>> @@ -84,6 +84,7 @@ extern "C" {
>>   #include <rte_service.h>
>>
>>   #include "rte_eventdev.h"
>> +#include "eventdev_pmd.h"
>>
>>   #define RTE_EVENT_ETH_RX_ADAPTER_MAX_INSTANCE 32
>>
>> diff --git a/lib/eventdev/rte_event_eth_tx_adapter.c b/lib/eventdev/rte_event_eth_tx_adapter.c
>> index 18c0359db7..74f88e6147 100644
>> --- a/lib/eventdev/rte_event_eth_tx_adapter.c
>> +++ b/lib/eventdev/rte_event_eth_tx_adapter.c
>> @@ -1154,6 +1154,37 @@ rte_event_eth_tx_adapter_start(uint8_t id)
>>          return ret;
>>   }
>>
>> +uint16_t
>> +rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
>> +                                uint8_t port_id,
>> +                                struct rte_event ev[],
>> +                                uint16_t nb_events,
>> +                                const uint8_t flags)
>> +{
>> +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> +
>> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> +       if (dev_id >= RTE_EVENT_MAX_DEVS ||
>> +               !rte_eventdevs[dev_id].attached) {
>> +               rte_errno = EINVAL;
>> +               return 0;
>> +       }
>> +
>> +       if (port_id >= dev->data->nb_ports) {
>> +               rte_errno = EINVAL;
>> +               return 0;
>> +       }
>> +#endif
>> +       rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
>> +               nb_events, flags);
>> +       if (flags)
>> +               return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
>> +                                                 ev, nb_events);
>> +       else
>> +               return dev->txa_enqueue(dev->data->ports[port_id], ev,
>> +                                       nb_events);
>> +}
>> +
>>   int
>>   rte_event_eth_tx_adapter_stats_get(uint8_t id,
>>                                  struct rte_event_eth_tx_adapter_stats *stats)
>> diff --git a/lib/eventdev/rte_event_eth_tx_adapter.h b/lib/eventdev/rte_event_eth_tx_adapter.h
>> index 8c59547165..3cd65e8a09 100644
>> --- a/lib/eventdev/rte_event_eth_tx_adapter.h
>> +++ b/lib/eventdev/rte_event_eth_tx_adapter.h
>> @@ -79,6 +79,7 @@ extern "C" {
>>   #include <rte_mbuf.h>
>>
>>   #include "rte_eventdev.h"
>> +#include "eventdev_pmd.h"
>>
>>   /**
>>    * Adapter configuration structure
>> @@ -348,36 +349,12 @@ rte_event_eth_tx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
>>    *              one or more events. This error code is only applicable to
>>    *              closed systems.
>>    */
>> -static inline uint16_t
>> +uint16_t
>>   rte_event_eth_tx_adapter_enqueue(uint8_t dev_id,
>> -                               uint8_t port_id,
>> -                               struct rte_event ev[],
>> -                               uint16_t nb_events,
>> -                               const uint8_t flags)
>> -{
>> -       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> -       if (dev_id >= RTE_EVENT_MAX_DEVS ||
>> -               !rte_eventdevs[dev_id].attached) {
>> -               rte_errno = EINVAL;
>> -               return 0;
>> -       }
>> -
>> -       if (port_id >= dev->data->nb_ports) {
>> -               rte_errno = EINVAL;
>> -               return 0;
>> -       }
>> -#endif
>> -       rte_eventdev_trace_eth_tx_adapter_enqueue(dev_id, port_id, ev,
>> -               nb_events, flags);
>> -       if (flags)
>> -               return dev->txa_enqueue_same_dest(dev->data->ports[port_id],
>> -                                                 ev, nb_events);
>> -       else
>> -               return dev->txa_enqueue(dev->data->ports[port_id], ev,
>> -                                       nb_events);
>> -}
>> +                                uint8_t port_id,
>> +                                struct rte_event ev[],
>> +                                uint16_t nb_events,
>> +                                const uint8_t flags);
>>
>>   /**
>>    * Retrieve statistics for an adapter
>> diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
>> index 594dd5e759..e2dad8a838 100644
>> --- a/lib/eventdev/rte_eventdev.c
>> +++ b/lib/eventdev/rte_eventdev.c
>> @@ -1119,6 +1119,65 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
>>          return count;
>>   }
>>
>> +static __rte_always_inline uint16_t
>> +__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>> +                       const struct rte_event ev[], uint16_t nb_events,
>> +                       const event_enqueue_burst_t fn)
>> +{
>> +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> +
>> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> +       if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
>> +               rte_errno = EINVAL;
>> +               return 0;
>> +       }
>> +
>> +       if (port_id >= dev->data->nb_ports) {
>> +               rte_errno = EINVAL;
>> +               return 0;
>> +       }
>> +#endif
>> +       rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
>> +       /*
>> +        * Allow zero cost non burst mode routine invocation if application
>> +        * requests nb_events as const one
>> +        */
>> +       if (nb_events == 1)
>> +               return (*dev->enqueue)(dev->data->ports[port_id], ev);
>> +       else
>> +               return fn(dev->data->ports[port_id], ev, nb_events);
>> +}
>> +
>> +uint16_t
>> +rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>> +                       const struct rte_event ev[], uint16_t nb_events)
>> +{
>> +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> +
>> +       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> +                                        dev->enqueue_burst);
>> +}
>> +
>> +uint16_t
>> +rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
>> +                           const struct rte_event ev[], uint16_t nb_events)
>> +{
>> +       const struct rte_eventdev *dev = &rte_event_devices[dev_id];
>> +
>> +       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> +                                        dev->enqueue_new_burst);
>> +}
>> +
>> +uint16_t
>> +rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
>> +                               const struct rte_event ev[], uint16_t nb_events)
>> +{
>> +       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> +
>> +       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> +                       dev->enqueue_forward_burst);
>> +}
>> +
>>   int
>>   rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
>>                                   uint64_t *timeout_ticks)
>> @@ -1135,6 +1194,29 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
>>          return (*dev->dev_ops->timeout_ticks)(dev, ns, timeout_ticks);
>>   }
>>
>> +uint16_t
>> +rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
>> +                       uint16_t nb_events, uint64_t timeout_ticks)
>> +{
>> +       struct rte_eventdev *dev = &rte_event_devices[dev_id];
>> +
>> +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> +       if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
>> +               rte_errno = EINVAL;
>> +               return 0;
>> +       }
>> +
>> +       if (port_id >= dev->data->nb_ports) {
>> +               rte_errno = EINVAL;
>> +               return 0;
>> +       }
>> +#endif
>> +       rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
>> +
>> +       return (*dev->dequeue_burst)(dev->data->ports[port_id], ev, nb_events,
>> +                                    timeout_ticks);
>> +}
>> +
>>   int
>>   rte_event_dev_service_id_get(uint8_t dev_id, uint32_t *service_id)
>>   {
>> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
>> index a9c496fb62..451e9fb0a0 100644
>> --- a/lib/eventdev/rte_eventdev.h
>> +++ b/lib/eventdev/rte_eventdev.h
>> @@ -1445,38 +1445,6 @@ struct rte_eventdev {
>>          void *reserved_ptrs[3];   /**< Reserved for future fields */
>>   } __rte_cache_aligned;
>>
>> -extern struct rte_eventdev *rte_eventdevs;
>> -/** @internal The pool of rte_eventdev structures. */
>> -
>> -static __rte_always_inline uint16_t
>> -__rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>> -                       const struct rte_event ev[], uint16_t nb_events,
>> -                       const event_enqueue_burst_t fn)
>> -{
>> -       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> -       if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
>> -               rte_errno = EINVAL;
>> -               return 0;
>> -       }
>> -
>> -       if (port_id >= dev->data->nb_ports) {
>> -               rte_errno = EINVAL;
>> -               return 0;
>> -       }
>> -#endif
>> -       rte_eventdev_trace_enq_burst(dev_id, port_id, ev, nb_events, fn);
>> -       /*
>> -        * Allow zero cost non burst mode routine invocation if application
>> -        * requests nb_events as const one
>> -        */
>> -       if (nb_events == 1)
>> -               return (*dev->enqueue)(dev->data->ports[port_id], ev);
>> -       else
>> -               return fn(dev->data->ports[port_id], ev, nb_events);
>> -}
>> -
>>   /**
>>    * Enqueue a burst of events objects or an event object supplied in *rte_event*
>>    * structure on an  event device designated by its *dev_id* through the event
>> @@ -1520,15 +1488,9 @@ __rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>>    *              closed systems.
>>    * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
>>    */
>> -static inline uint16_t
>> +uint16_t
>>   rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>> -                       const struct rte_event ev[], uint16_t nb_events)
>> -{
>> -       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> -       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> -                       dev->enqueue_burst);
>> -}
>> +                       const struct rte_event ev[], uint16_t nb_events);
>>
>>   /**
>>    * Enqueue a burst of events objects of operation type *RTE_EVENT_OP_NEW* on
>> @@ -1571,15 +1533,9 @@ rte_event_enqueue_burst(uint8_t dev_id, uint8_t port_id,
>>    * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
>>    * @see rte_event_enqueue_burst()
>>    */
>> -static inline uint16_t
>> +uint16_t
>>   rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
>> -                       const struct rte_event ev[], uint16_t nb_events)
>> -{
>> -       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> -       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> -                       dev->enqueue_new_burst);
>> -}
>> +                           const struct rte_event ev[], uint16_t nb_events);
>>
>>   /**
>>    * Enqueue a burst of events objects of operation type *RTE_EVENT_OP_FORWARD*
>> @@ -1622,15 +1578,10 @@ rte_event_enqueue_new_burst(uint8_t dev_id, uint8_t port_id,
>>    * @see rte_event_port_attr_get(), RTE_EVENT_PORT_ATTR_ENQ_DEPTH
>>    * @see rte_event_enqueue_burst()
>>    */
>> -static inline uint16_t
>> +uint16_t
>>   rte_event_enqueue_forward_burst(uint8_t dev_id, uint8_t port_id,
>> -                       const struct rte_event ev[], uint16_t nb_events)
>> -{
>> -       const struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> -       return __rte_event_enqueue_burst(dev_id, port_id, ev, nb_events,
>> -                       dev->enqueue_forward_burst);
>> -}
>> +                               const struct rte_event ev[],
>> +                               uint16_t nb_events);
>>
>>   /**
>>    * Converts nanoseconds to *timeout_ticks* value for rte_event_dequeue_burst()
>> @@ -1727,36 +1678,9 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
>>    *
>>    * @see rte_event_port_dequeue_depth()
>>    */
>> -static inline uint16_t
>> +uint16_t
>>   rte_event_dequeue_burst(uint8_t dev_id, uint8_t port_id, struct rte_event ev[],
>> -                       uint16_t nb_events, uint64_t timeout_ticks)
>> -{
>> -       struct rte_eventdev *dev = &rte_eventdevs[dev_id];
>> -
>> -#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
>> -       if (dev_id >= RTE_EVENT_MAX_DEVS || !rte_eventdevs[dev_id].attached) {
>> -               rte_errno = EINVAL;
>> -               return 0;
>> -       }
>> -
>> -       if (port_id >= dev->data->nb_ports) {
>> -               rte_errno = EINVAL;
>> -               return 0;
>> -       }
>> -#endif
>> -       rte_eventdev_trace_deq_burst(dev_id, port_id, ev, nb_events);
>> -       /*
>> -        * Allow zero cost non burst mode routine invocation if application
>> -        * requests nb_events as const one
>> -        */
>> -       if (nb_events == 1)
>> -               return (*dev->dequeue)(
>> -                       dev->data->ports[port_id], ev, timeout_ticks);
>> -       else
>> -               return (*dev->dequeue_burst)(
>> -                       dev->data->ports[port_id], ev, nb_events,
>> -                               timeout_ticks);
>> -}
>> +                       uint16_t nb_events, uint64_t timeout_ticks);
>>
>>   /**
>>    * Link multiple source event queues supplied in *queues* to the destination
>> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
>> index 88625621ec..8da79cbdc0 100644
>> --- a/lib/eventdev/version.map
>> +++ b/lib/eventdev/version.map
>> @@ -13,7 +13,11 @@ DPDK_22 {
>>          rte_event_crypto_adapter_stats_get;
>>          rte_event_crypto_adapter_stats_reset;
>>          rte_event_crypto_adapter_stop;
>> +       rte_event_enqueue_burst;
>> +       rte_event_enqueue_new_burst;
>> +       rte_event_enqueue_forward_burst;
>>          rte_event_dequeue_timeout_ticks;
>> +       rte_event_dequeue_burst;
>>          rte_event_dev_attr_get;
>>          rte_event_dev_close;
>>          rte_event_dev_configure;
>> --
>> 2.17.1
>>


^ permalink raw reply	[relevance 0%]

* Re: [dpdk-dev] [PATCH] doc: announce change in dma mapping/unmapping
  2021-08-26 10:14  0%         ` Burakov, Anatoly
@ 2021-08-31 13:42  0%           ` Ding, Xuan
  0 siblings, 0 replies; 200+ results
From: Ding, Xuan @ 2021-08-31 13:42 UTC (permalink / raw)
  To: Burakov, Anatoly, Richardson, Bruce
  Cc: Yigit, Ferruh, maxime.coquelin, dev, Xia, Chenbo, Hu, Jiayu,
	techboard, David Marchand

Hi,

> -----Original Message-----
> From: Burakov, Anatoly <anatoly.burakov@intel.com>
> Sent: Thursday, August 26, 2021 6:15 PM
> To: Richardson, Bruce <bruce.richardson@intel.com>
> Cc: Yigit, Ferruh <ferruh.yigit@intel.com>; Ding, Xuan
> <xuan.ding@intel.com>; dev@dpdk.org; maxime.coquelin@redhat.com; Xia,
> Chenbo <chenbo.xia@intel.com>; Hu, Jiayu <jiayu.hu@intel.com>;
> techboard@dpdk.org; David Marchand <david.marchand@redhat.com>
> Subject: Re: [PATCH] doc: announce change in dma mapping/unmapping
> 
> On 26-Aug-21 11:09 AM, Bruce Richardson wrote:
> > On Thu, Aug 26, 2021 at 10:46:07AM +0100, Burakov, Anatoly wrote:
> >> On 26-Aug-21 10:29 AM, Ferruh Yigit wrote:
> >>> On 8/25/2021 12:47 PM, Burakov, Anatoly wrote:
> >>>> On 25-Aug-21 12:27 PM, Xuan Ding wrote:
> >>>>> Currently, the VFIO subsystem will compact adjacent DMA regions for
> the
> >>>>> purposes of saving space in the internal list of mappings. This has a
> >>>>> side effect of compacting two separate mappings that just happen to
> be
> >>>>> adjacent in memory. Since VFIO implementation on IA platforms also
> does
> >>>>> not allow partial unmapping of memory mapped for DMA, the current
> DPDK
> >>>>> VFIO implementation will prevent unmapping of accidentally adjacent
> >>>>> maps even though it could have been unmapped [1].
> >>>>>
> >>>>> The proper fix for this issue is to change the VFIO DMA mapping API
> to
> >>>>> also include page size, and always map memory page-by-page.
> >>>>>
> >>>>> [1] https://mails.dpdk.org/archives/dev/2021-July/213493.html
> >>>>>
> >>>>> Signed-off-by: Xuan Ding <xuan.ding@intel.com>
> >>>>> ---
> >>>>>     doc/guides/rel_notes/deprecation.rst | 3 +++
> >>>>>     1 file changed, 3 insertions(+)
> >>>>>
> >>>>> diff --git a/doc/guides/rel_notes/deprecation.rst
> >>>>> b/doc/guides/rel_notes/deprecation.rst
> >>>>> index 76a4abfd6b..272ffa993e 100644
> >>>>> --- a/doc/guides/rel_notes/deprecation.rst
> >>>>> +++ b/doc/guides/rel_notes/deprecation.rst
> >>>>> @@ -287,3 +287,6 @@ Deprecation Notices
> >>>>>       reserved bytes to 2 (from 3), and use 1 byte to indicate warnings
> and other
> >>>>>       information from the crypto/security operation. This field will be
> used to
> >>>>>       communicate events such as soft expiry with IPsec in lookaside
> mode.
> >>>>> +
> >>>>> +  * vfio: the functions `rte_vfio_container_dma_map` and
> >>>>> `rte_vfio_container_dma_unmap`
> >>>>> +  will be amended to include page size. This change is targeted for
> DPDK 21.11.
> >>>>>
> >>>>
> >>>> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
> >>>>
> >>>
> >>> Techboard decision was to add a new API, instead of updating existing
> ones, to
> >>> not break the apps using this API.
> >>>
> >>> @Xuan, @Anatoly, can you please confirm if this will solve your problem?
> >>>
> >>
> >> I don't think adding a new API is a particularly good solution. The "new"
> >> API will be almost exactly as the old one, but adding one parameter. I
> don't
> >> expect code duplication to be an issue, but having two API's that do the
> >> same thing seems like it's rife for potential confusion.
> >>
> > Well, if one API is marked as deprecated, then there will be no confusion
> > for users, since using the wrong one will give a warning pointing to the
> > right one.
> >
> >> If we add a new API, we can then either remove the old API entirely in
> >> 22.11 (effectively renaming it), or we remove the new API in 22.11 and
> >> rename it back to the old function name. I don't think either of these
> >> is a good solution, as we risk introducing more users for the API that
> >> will later change.
> > The new API will not be renamed to the old one, since that would break
> apps
> > using it without proper deprecation process. Removing the old one alone
> > would be the approach to be used, but it would be correctly following the
> > deprecation process and giving users at least 1 year, if not 2, of notice
> > about the change.
> >
> >>
> >> I think the pain of updating current software for 21.11 (while keeping
> >> compatibility with 20.11 ABI!) is going to happen regardless, and whether
> we
> >> decide to add a "temporary" new API or permanently rename the old one.
> It's
> >> (in my opinion) easier to just bite the bullet and update the function in
> >> 21.11.
> > I fail to see the issue with adding a new function. Whether we add a new
> > function or add a parameter to the existing one, code will have to change
> > either way. The advantage of the former scheme, adding the new function,
> is
> > that it shows that we are serious about our ABI/API compatibility process,
> > and are not lax about passing exceptions when other options are available.
> >
> >>
> >> However, if the tech board feels like adding a new API is a good solution,
> >> then okay, but we need to flesh out roadmap a bit better. Do we rename
> the
> >> old API, or do we add a temporary new API?
> >
> > New API added, old API deprecated. In future old API goes away leaving
> new
> > API as the only option.
> >
> > /Bruce
> >
> 
> Okay, so it's settled then. I revoke my ack for this patch, and we need
> a new deprecation notice.

A new deprecation notice was sent [1], targeting the API change for DPDK 22.02.
For the unmapping issue mentioned before, we developed a compromise solution
that optimizes the partial unmap logic in DPDK 21.11 and is compatible with the
current API.

[1] https://mails.dpdk.org/archives/dev/2021-August/217802.html
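
For reference, a rough sketch of the direction this settles on, with invented
names (today's real call is rte_vfio_container_dma_map(container_fd, vaddr,
iova, len); nothing below is a committed DPDK prototype): the existing call is
kept but marked deprecated for a release cycle, and a new call carrying the
page size is added alongside it.

#include <stdint.h>
#include <rte_common.h>

/* Old form, kept for the deprecation period; __rte_deprecated makes every
 * caller see a compile-time warning pointing at the replacement. */
__rte_deprecated
int dummy_container_dma_map(int container_fd, uint64_t vaddr,
                            uint64_t iova, uint64_t len);

/* New form: the extra page size lets the implementation map and unmap
 * page-by-page, so accidentally adjacent regions are no longer merged. */
int dummy_container_dma_map_v2(int container_fd, uint64_t vaddr,
                               uint64_t iova, uint64_t len, uint64_t pgsz);
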

Thanks for your suggestion and support!

Regards,
Xuan
> 
> --
> Thanks,
> Anatoly

^ permalink raw reply	[relevance 0%]

* [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols
                     ` (3 preceding siblings ...)
  @ 2021-08-31 14:50  3% ` Ray Kinsella
  4 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2021-08-31 14:50 UTC (permalink / raw)
  To: dev
  Cc: bruce.richardson, stephen, ferruh.yigit, thomas, ktraynor, mdr, aconole

Scripts to count and track the lifecycle of DPDK symbols.

The symbol-tool script reports on the growth of symbols over releases
and lists expired symbols. The notify-symbol-maintainers script
consumes the input from symbol-tool and generates email notifications
of expired symbols.

v2: reworked to fix pylint errors
v3: sent with the correct in-reply-to
v4: fix typos picked up by the CI
v5: fix terminal_size & directory args
v6: added list-expired, to list expired experimental symbols
v7: fix typo in comments
v8: added tool to notify maintainers of expired symbols
v9: removed hardcoded email addresses and script names
v10: added ability to identify and notify the original contributors

Ray Kinsella (3):
  devtools: script to track symbols over releases
  devtools: script to send notifications of expired symbols
  maintainers: add new abi scripts

 MAINTAINERS                           |   2 +
 devtools/notify-symbol-maintainers.py | 256 ++++++++++++++
 devtools/symbol-tool.py               | 482 ++++++++++++++++++++++++++
 3 files changed, 740 insertions(+)
 create mode 100755 devtools/notify-symbol-maintainers.py
 create mode 100755 devtools/symbol-tool.py

-- 
2.26.2


^ permalink raw reply	[relevance 3%]

-- links below jump to the message on this page --
2021-01-10 18:44     [dpdk-dev] [PATCH] doc: add release milestones definition Asaf Penso
2021-08-26 10:11  5% ` [dpdk-dev] [PATCH v8] " Thomas Monjalon
2021-02-25 17:02     [dpdk-dev] [PATCH 1/7] common/octeontx: enable build only on 64bit Linux pbhagavatula
2021-03-25 14:52     ` [dpdk-dev] [PATCH 21.11 v2 0/3] octeontx build only on 64-bit Linux Thomas Monjalon
2021-08-17  8:46  0%   ` David Marchand
2021-04-12 21:53     [dpdk-dev] [PATCH] devtools: test different build types Thomas Monjalon
2021-05-21 15:03     ` David Marchand
2021-07-23 20:26  0%   ` Andrew Rybchenko
2021-08-02 22:45 23% ` [dpdk-dev] [PATCH v2] " Thomas Monjalon
2021-08-08 12:51     ` [dpdk-dev] [PATCH v3 0/5] more build tests Thomas Monjalon
2021-08-08 12:51 23%   ` [dpdk-dev] [PATCH v3 5/5] devtools: test different build types Thomas Monjalon
2021-05-20 18:42     [dpdk-dev] [PATCH v3] doc: announce API changes for Windows compatibility Dmitry Kozlyuk
2021-07-21 19:55     ` Dmitry Kozlyuk
2021-07-21 19:55       ` [dpdk-dev] [PATCH v4] " Dmitry Kozlyuk
2021-08-02 12:13         ` Thomas Monjalon
2021-08-02 12:45           ` [dpdk-dev] [EXT] " Akhil Goyal
2021-08-02 13:00  3%         ` Dmitry Kozlyuk
2021-08-02 13:48  0%           ` Akhil Goyal
2021-08-02 14:57  0%             ` Tal Shnaiderman
2021-08-02 17:46  0%             ` Thomas Monjalon
2021-06-01  1:56     [dpdk-dev] [PATCH v1 0/2] relative path support for ABI compatibility check Feifei Wang
2021-06-01  1:56     ` [dpdk-dev] [PATCH v1 2/2] devtools: use absolute path for the build directory Feifei Wang
2021-07-28  7:20  0%   ` [dpdk-dev] Re: " Feifei Wang
2021-08-11  6:17  8% ` [dpdk-dev] [PATCH v2 0/1] relative path support for ABI compatibility check Feifei Wang
2021-08-11  6:17 17%   ` [dpdk-dev] [PATCH v2 1/1] devtools: add " Feifei Wang
2021-06-01  8:41     [dpdk-dev] [PATCH] doc: announce removal of ABIs in PCI bus driver Chenbo Xia
2021-07-23  7:39  3% ` Xia, Chenbo
2021-07-23 12:46  3%   ` Ferruh Yigit
2021-07-26  5:56  0%     ` Xia, Chenbo
2021-07-27  8:44  0%       ` Bruce Richardson
2021-07-28 15:32  0%         ` Andrew Rybchenko
2021-07-31 20:44  0%         ` Thomas Monjalon
2021-08-03  1:52  0%           ` Xia, Chenbo
2021-08-03  8:19  0%             ` Thomas Monjalon
2021-07-27 10:58  0% ` Ananyev, Konstantin
2021-06-18 16:36     [dpdk-dev] [PATCH] devtools: script to track map symbols Ray Kinsella
2021-08-04 16:23  5% ` [dpdk-dev] [PATCH v6] " Ray Kinsella
2021-08-04 16:27  5% ` [dpdk-dev] [PATCH v7] " Ray Kinsella
2021-08-06 17:54     ` [dpdk-dev] [PATCH v8 0/2] devtools: scripts to count and track symbols Ray Kinsella
2021-08-06 17:54  5%   ` [dpdk-dev] [PATCH v8 1/2] devtools: script to track map symbols Ray Kinsella
2021-08-06 17:54  5%   ` [dpdk-dev] [PATCH v8 2/2] devtools: script to send notifications of expired symbols Ray Kinsella
2021-08-09 12:53     ` [dpdk-dev] [PATCH v9 0/2] devtools: scripts to count and track symbols Ray Kinsella
2021-08-09 12:53  5%   ` [dpdk-dev] [PATCH v9 1/2] devtools: script to track symbols over releases Ray Kinsella
2021-08-09 12:53  5%   ` [dpdk-dev] [PATCH v9 2/2] devtools: script to send notifications of expired symbols Ray Kinsella
2021-08-31 14:50  3% ` [dpdk-dev] [PATCH v10 0/3] devtools: scripts to count and track symbols Ray Kinsella
2021-06-18 21:26     [dpdk-dev] [PATCH v10 0/9] eal: Add EAL API for threading Narcisa Ana Maria Vasile
2021-07-30 22:31  3% ` [dpdk-dev] [PATCH v11 00/10] " Narcisa Ana Maria Vasile
2021-08-02 17:32  3%   ` [dpdk-dev] [PATCH v12 " Narcisa Ana Maria Vasile
2021-08-03 19:01  3%     ` [dpdk-dev] [PATCH v13 " Narcisa Ana Maria Vasile
2021-08-19 21:31  3%       ` [dpdk-dev] [PATCH v14 0/9] " Narcisa Ana Maria Vasile
2021-06-19  1:57     [dpdk-dev] [PATCH v2 0/6] Enable the internal EAL thread API Narcisa Ana Maria Vasile
2021-08-18 13:44  4% ` [dpdk-dev] [PATCH v3 " Narcisa Ana Maria Vasile
2021-08-18 13:44  4%   ` [dpdk-dev] [PATCH v3 2/6] eal: add function for control thread creation Narcisa Ana Maria Vasile
2021-06-21  7:35     [dpdk-dev] [RFC PATCH v3 0/3] Add PIE support for HQoS library Liguzinski, WojciechX
2021-07-05  8:04     ` [dpdk-dev] [RFC PATCH v4 " Liguzinski, WojciechX
2021-07-16 12:46  0%   ` Dumitrescu, Cristian
2021-06-22 16:48     [dpdk-dev] [PATCH 0/2] OCTEONTX crypto adapter support Shijith Thotton
2021-06-23 20:53     ` [dpdk-dev] [PATCH v2 " Shijith Thotton
2021-06-23 20:53       ` [dpdk-dev] [PATCH v2 1/2] drivers: add octeontx crypto adapter framework Shijith Thotton
2021-07-15 14:21         ` David Marchand
2021-07-16  8:39           ` [dpdk-dev] [EXT] " Akhil Goyal
2021-07-20 11:58  3%         ` Akhil Goyal
2021-07-20 12:14  0%           ` David Marchand
2021-07-21  9:44  3%             ` Thomas Monjalon
2021-07-21 15:11  4%               ` Brandon Lo
2021-07-22  7:45  0%               ` Akhil Goyal
2021-07-22  9:06  3%                 ` [dpdk-dev] [PATCH] crypto/octeontx: enable build on non Linux OS Shijith Thotton
2021-07-22  9:17  0%                   ` Akhil Goyal
2021-07-22 19:06  0%                     ` Thomas Monjalon
2021-07-22 19:08  3%                       ` Thomas Monjalon
2021-07-22 20:20  3%                         ` Brandon Lo
2021-07-22 20:32  0%                           ` Thomas Monjalon
2021-06-23  0:03     [dpdk-dev] [PATCH v5 2/2] bus/auxiliary: introduce auxiliary bus Xueming Li
2021-06-25 11:47     ` [dpdk-dev] [PATCH v6 " Xueming Li
2021-08-04 10:00       ` Kinsella, Ray
     [not found]         ` <DM4PR12MB5373DBD9E73E5E0E8505C129A1F19@DM4PR12MB5373.namprd12.prod.outlook.com>
     [not found]           ` <97d5d1b3-40c3-09ac-2978-83c984b30af0@ashroe.eu>
     [not found]             ` <DM4PR12MB53736410D2C07101F872363EA1F19@DM4PR12MB5373.namprd12.prod.outlook.com>
2021-08-04 12:14  3%           ` Kinsella, Ray
2021-08-04 13:00  3%             ` Xueming(Steven) Li
2021-08-04 13:12  5%               ` Thomas Monjalon
2021-08-04 13:53  0%                 ` Kinsella, Ray
2021-08-04 14:13  4%                   ` Thomas Monjalon
2021-06-29 13:46     [dpdk-dev] [PATCH] ethdev: add namespace Ferruh Yigit
2021-08-27  1:19  1% ` [dpdk-dev] [PATCH v2] " Ferruh Yigit
2021-08-30 17:19  1%   ` [dpdk-dev] [PATCH v3] " Ferruh Yigit
2021-06-29 16:00     [dpdk-dev] [PATCH v1] doc: policy on promotion of experimental APIs Ray Kinsella
2021-07-01 10:38     ` [dpdk-dev] [PATCH v3] doc: policy on the " Ray Kinsella
2021-07-09  6:16       ` Jerin Jacob
2021-07-09 19:15         ` Tyler Retzlaff
2021-07-11  7:22           ` Jerin Jacob
2021-08-03 14:12  3%         ` Kinsella, Ray
2021-08-03 16:44 23% ` [dpdk-dev] [PATCH v4] " Ray Kinsella
2021-08-04  9:34 23% ` [dpdk-dev] [PATCH v5] " Ray Kinsella
2021-08-04 10:39  3%   ` Thomas Monjalon
2021-08-04 11:49  0%     ` Kinsella, Ray
2021-07-02 13:18     [dpdk-dev] [PATCH] dmadev: introduce DMA device library Chengwen Feng
2021-07-19  3:29     ` [dpdk-dev] [PATCH v6] " Chengwen Feng
2021-07-19  6:21  3%   ` Jerin Jacob
2021-07-12  8:02     [dpdk-dev] [PATCH v1] doc: update atomic operation deprecation Joyce Kong
2021-07-17 18:47  0% ` Honnappa Nagarahalli
2021-07-23  9:49  4% ` [dpdk-dev] [PATCH v2] " Joyce Kong
2021-07-12 16:17     [dpdk-dev] [PATCH] ethdev: fix representor port ID search by name Andrew Rybchenko
2021-07-19  6:58  0% ` Xueming(Steven) Li
2021-07-19  8:46  0%   ` Andrew Rybchenko
2021-07-19 11:54  0%     ` Xueming(Steven) Li
2021-07-19 12:36  0%       ` Andrew Rybchenko
2021-07-19 12:50  0%         ` Xueming(Steven) Li
2021-07-20  8:59  0%           ` Andrew Rybchenko
2021-07-29  4:13  0%             ` Xueming(Steven) Li
2021-08-01  8:40  0%               ` Andrew Rybchenko
2021-08-01 14:25  0%                 ` Xueming(Steven) Li
2021-07-29  4:20  0% ` Xueming(Steven) Li
2021-08-01  8:50  0%   ` Andrew Rybchenko
2021-08-01 14:15  0%     ` Xueming(Steven) Li
2021-08-18 14:00  3% ` [dpdk-dev] [PATCH v2] " Andrew Rybchenko
2021-08-27  9:18  0%   ` Xueming(Steven) Li
2021-08-20 12:18  3% ` [dpdk-dev] [PATCH v3] " Andrew Rybchenko
2021-07-13 13:35     [dpdk-dev] [PATCH 00/10] new features for ipsec and security libraries Radu Nicolau
2021-07-13 13:35     ` [dpdk-dev] [PATCH 01/10] security: add support for TSO on IPsec session Radu Nicolau
2021-07-27 18:34  3%   ` [dpdk-dev] [EXT] " Akhil Goyal
2021-07-29  8:37  0%     ` Nicolau, Radu
2021-07-31 17:50  0%     ` Akhil Goyal
2021-07-13 20:12     [dpdk-dev] [PATCH] eal: fix argument to rte_bsf32_safe Stephen Hemminger
2021-07-19 17:15  0% ` Tyler Retzlaff
2021-07-19 22:00  3%   ` Stephen Hemminger
2021-07-20 13:26  0%     ` Thomas Monjalon
2021-07-23  0:52  8% ` [dpdk-dev] [PATCH v2] " Stephen Hemminger
2021-07-23 15:45  8% ` [dpdk-dev] [PATCH v3] " Stephen Hemminger
2021-07-24  7:58  0%   ` Thomas Monjalon
2021-07-24 23:50  0%     ` Stephen Hemminger
2021-07-15 11:33     [dpdk-dev] [PATCH v3] app/testpmd: fix testpmd doesn't show RSS hash offload Jie Wang
2021-07-15 11:57     ` [dpdk-dev] [PATCH v4] " Jie Wang
2021-07-15  4:53       ` Li, Xiaoyun
2021-07-16  8:30         ` Li, Xiaoyun
2021-07-16  8:52           ` [dpdk-dev] [dpdk-stable] " Ferruh Yigit
     [not found]             ` <DM8PR11MB5639C757A790F65CBFB647C2D1E19@DM8PR11MB5639.namprd11.prod.outlook.com>
2021-07-19 16:18  0%           ` Ferruh Yigit
2021-07-22 11:03  0%             ` Andrew Rybchenko
2021-08-09  8:53  0%               ` Ferruh Yigit
2021-07-16 14:51  5% [dpdk-dev] Minutes of Technical Board Meeting 2021-06-02 Stephen Hemminger
2021-07-16 17:02     [dpdk-dev] [PATCH] eventdev: configure the Rx event buffer size Ganapati Kundapura
2021-07-19  6:43     ` Jerin Jacob
2021-07-19 15:26  3%   ` Kundapura, Ganapati
2021-07-19 16:13  3%     ` Jerin Jacob
2021-07-20  5:46  3% [dpdk-dev] [PATCH 0/2] Improvements to rte_security Anoob Joseph
2021-07-22 20:22  3% [dpdk-dev] DPDK Release Status Meeting 22/07/2021 Thomas Monjalon
2021-07-23  7:02     [dpdk-dev] RFC: Enahancements to Rx adapter for DPDK 21.11 Kundapura, Ganapati
2021-07-26 13:04     ` Kundapura, Ganapati
2021-07-28  6:08  4%   ` Jerin Jacob
2021-07-28  6:23  4%     ` Kundapura, Ganapati
2021-07-30 11:17  0%       ` Jerin Jacob
2021-07-27  3:41     [dpdk-dev] [RFC] ethdev: change queue release callback Xueming Li
2021-07-28  7:40     ` Andrew Rybchenko
2021-08-09 14:39       ` Singh, Aman Deep
2021-08-09 15:31  4%     ` Ferruh Yigit
2021-08-10  8:03  3%       ` Xueming(Steven) Li
2021-08-10  8:54  0%         ` Ferruh Yigit
2021-08-10  9:07  0%           ` Xueming(Steven) Li
2021-08-11 11:57  0%             ` Ferruh Yigit
2021-08-11 12:13  0%               ` Xueming(Steven) Li
2021-08-12 14:29  0%                 ` Xueming(Steven) Li
2021-07-27  3:42     [dpdk-dev] [RFC] ethdev: introduce shared Rx queue Xueming Li
2021-08-11 14:04     ` [dpdk-dev] [PATCH v2 01/15] " Xueming Li
2021-08-17  9:33       ` Jerin Jacob
2021-08-17 11:31         ` Xueming(Steven) Li
2021-08-17 15:11           ` Jerin Jacob
2021-08-18 11:14             ` Xueming(Steven) Li
2021-08-19  5:26               ` Jerin Jacob
2021-08-19 12:09                 ` Xueming(Steven) Li
2021-08-26 11:58  4%               ` Jerin Jacob
2021-08-28 14:16  0%                 ` Xueming(Steven) Li
2021-08-30  9:31  3%                   ` Jerin Jacob
2021-08-30 10:13  0%                     ` Xueming(Steven) Li
2021-07-27 17:36     [dpdk-dev] [PATCH] doc: announce security API changes for Inline IPsec Nithin Dabilpuram
2021-07-30 22:16  3% ` Thomas Monjalon
2021-08-03  2:11  3%   ` Nithin Dabilpuram
2021-07-30 21:44     [dpdk-dev] [PATCH] doc: abstract the behaviour of rte_ctrl_thread_create Honnappa Nagarahalli
2021-08-07 14:55     ` Thomas Monjalon
2021-08-09 13:18       ` Honnappa Nagarahalli
2021-08-23  9:40  3%     ` Olivier Matz
2021-08-23 21:18  0%       ` Honnappa Nagarahalli
2021-07-31 18:13  8% [dpdk-dev] [PATCH 0/4] cryptodev and security ABI improvements Akhil Goyal
2021-07-31 18:13  3% ` [dpdk-dev] [PATCH 1/4] cryptodev: remove LIST_END enumerators Akhil Goyal
2021-07-31 18:13  3% ` [dpdk-dev] [PATCH 4/4] security: add reserved bitfields Akhil Goyal
2021-07-31 18:17  4% ` [dpdk-dev] [PATCH 0/4] cryptodev and security ABI improvements Akhil Goyal
2021-08-01 10:22     [dpdk-dev] [PATCH 1/2] ethdev: announce flow API action PORT_ID changes Andrew Rybchenko
2021-08-01 10:57     ` Eli Britstein
2021-08-01 12:03  3%   ` Andrew Rybchenko
2021-08-01 12:23  0%     ` Ori Kam
2021-08-01 12:43  0%       ` Andrew Rybchenko
2021-08-01 12:56  0%         ` Ori Kam
2021-08-01 13:23  0%           ` Andrew Rybchenko
2021-08-01 16:13  0%             ` Ori Kam
2021-08-01 20:09  0%               ` Andrew Rybchenko
2021-08-02  7:28  0%                 ` Ori Kam
2021-08-02 10:11  0%                   ` Andrew Rybchenko
2021-08-02 12:33  3% [dpdk-dev] [dpdk-announce] URGENT: review of deprecation notices before closing 21.08 Thomas Monjalon
2021-08-02 16:03 10% [dpdk-dev] [PATCH] doc: announce: make rte intr handle internal Harman Kalra
2021-08-02 19:20  0% ` Andrew Rybchenko
2021-08-03  2:37  0% ` Xia, Chenbo
2021-08-03  4:05  0%   ` Jerin Jacob
2021-08-04 14:22  0%     ` Thomas Monjalon
2021-08-02 17:32     [dpdk-dev] [PATCH] doc: announce changes to eventdev library pbhagavatula
2021-08-02 21:09     ` [dpdk-dev] [PATCH v2] " pbhagavatula
2021-08-03  4:12  3%   ` Jerin Jacob
2021-08-03  8:32  0%     ` Mattias Rönnblom
2021-08-04  5:57  0%     ` Jayatheerthan, Jay
2021-08-04  6:06  0%     ` Gujjar, Abhinandan S
2021-08-05 14:22  0%     ` Thomas Monjalon
2021-08-03 11:44  3% [dpdk-dev] [PATCH] doc: announce cryptodev-PMD interface as internal Akhil Goyal
2021-08-03 19:25  0% ` Ajit Khaparde
2021-08-04  6:44  0%   ` Matan Azrad
2021-08-04  8:44  0%     ` Hemant Agrawal
2021-08-04 14:35  0%       ` Thomas Monjalon
2021-08-03 11:55     [dpdk-dev] [PATCH] doc: announce restructuring of crypto session structs Akhil Goyal
2021-08-03 12:01     ` [dpdk-dev] [PATCH v2] " Akhil Goyal
2021-08-05 13:57       ` Zhang, Roy Fan
2021-08-05 14:09         ` Akhil Goyal
2021-08-05 14:53           ` Zhang, Roy Fan
2021-08-05 15:03  3%         ` Akhil Goyal
2021-08-05 21:57  7% [dpdk-dev] [PATCH v1] doc: update release notes for 21.08 John McNamara
2021-08-08 17:46  3% [dpdk-dev] [dpdk-announce] DPDK 21.08 released Thomas Monjalon
2021-08-08 17:50  0% ` St Leger, Jim
2021-08-08 19:26 11% [dpdk-dev] [PATCH] version: 21.11-rc0 Thomas Monjalon
2021-08-12 14:36  0% ` Ferruh Yigit
2021-08-12 18:57  0%   ` [dpdk-dev] [EXT] " Akhil Goyal
2021-08-17  6:34  4% ` [dpdk-dev] " David Marchand
2021-08-17 12:04  4%   ` [dpdk-dev] [dpdk-ci] " Lincoln Lavoie
2021-08-17 15:19  0%     ` David Marchand
2021-08-17 16:02  0%       ` Ali Alnubani
2021-08-24  7:58  3%     ` David Marchand
2021-08-24 12:19  3%       ` Lincoln Lavoie
2021-08-10 18:27  5% [dpdk-dev] [Bug 788] i40e: 16BYTE_RX_DESC build broken on FreeBSD-13 bugzilla
2021-08-11 20:46     [dpdk-dev] [PATCHv2] include: fix sys/queue.h William Tu
2021-08-12 20:05     ` [dpdk-dev] [PATCHv3] " William Tu
2021-08-12 21:58  3%   ` Dmitry Kozlyuk
2021-08-13  1:02  1%   ` [dpdk-dev] [PATCHv4] eal: remove sys/queue.h from public headers William Tu
2021-08-13  1:11  0%     ` Stephen Hemminger
2021-08-13  1:36  0%       ` William Tu
2021-08-13  3:36  1%     ` [dpdk-dev] [PATCHv5] " William Tu
2021-08-13 18:59  0%       ` Dmitry Kozlyuk
2021-08-14  2:51  1%       ` [dpdk-dev] [PATCH v6] " William Tu
2021-08-18 23:26  1%         ` [dpdk-dev] [PATCH v7] " William Tu
2021-08-23 13:03  1%           ` [dpdk-dev] [PATCH v8] " William Tu
2021-08-24 16:21  1%             ` [dpdk-dev] [PATCH v9] " William Tu
2021-08-13 16:51     [dpdk-dev] [PATCH v1 0/6] bbdev update related to CRC usage Nicolas Chautru
2021-08-13 16:51  4% ` [dpdk-dev] [PATCH v1 1/6] bbdev: add capability for CRC16 check Nicolas Chautru
     [not found]     <e600e472-2b39-7f07-d20e-9d6fe8e6d515@intel.com>
2021-08-16  9:34  3% ` [dpdk-dev] Minutes of Technical Board Meeting, 2021-08-11 Ferruh Yigit
2021-08-17 17:38     [dpdk-dev] [PATCH v5] app/testpmd: fix testpmd doesn't show RSS hash offload Jie Wang
2021-08-24 18:19     ` [dpdk-dev] [PATCH v6 0/2] testpmd shows incorrect rx_offload configuration Jie Wang
2021-08-24 18:19       ` [dpdk-dev] [PATCH v6 1/2] ethdev: add an API to get device configuration info Jie Wang
2021-08-25 20:07  3%     ` Ferruh Yigit
2021-08-26  6:00  0%       ` Ajit Khaparde
2021-08-18  9:07     [dpdk-dev] [PATCH 0/4] net/mlx5: implicit mempool registration Dmitry Kozlyuk
2021-08-18  9:07  4% ` [dpdk-dev] [PATCH 2/4] mempool: add non-IO flag Dmitry Kozlyuk
2021-08-18 13:44     [dpdk-dev] [PATCH v3 6/6] Allow choice between internal EAL thread API and external lib Narcisa Ana Maria Vasile
2021-08-18 21:19  4% ` [dpdk-dev] [PATCH v4 0/6] Enable the internal EAL thread API Narcisa Ana Maria Vasile
2021-08-18 21:19  4%   ` [dpdk-dev] [PATCH v4 2/6] eal: add function for control thread creation Narcisa Ana Maria Vasile
2021-08-19 21:10     [dpdk-dev] [PATCH v2 0/6] bbdev update related to CRC usage Nicolas Chautru
2021-08-19 21:10  4% ` [dpdk-dev] [PATCH v2 1/6] bbdev: add capability for CRC16 check Nicolas Chautru
2021-08-20 16:28  3% [dpdk-dev] [RFC 0/7] hide eth dev related structures Konstantin Ananyev
2021-08-20 16:28  2% ` [dpdk-dev] [RFC 1/7] eth: move ethdev 'burst' API into separate structure Konstantin Ananyev
2021-08-26 12:37  3% ` [dpdk-dev] [RFC 0/7] hide eth dev related structures Jerin Jacob
2021-08-21 14:07  0% [dpdk-dev] [PATCH 21.11 v2 0/3] octeontx build only on 64-bit Linux Pavan Nikhilesh Bhagavatula
2021-08-23 19:40     [dpdk-dev] [RFC 01/15] eventdev: make driver interface as internal pbhagavatula
2021-08-23 19:40  2% ` [dpdk-dev] [RFC 04/15] eventdev: move inline APIs into separate structure pbhagavatula
2021-08-23 19:40     ` [dpdk-dev] [RFC 11/15] eventdev: reserve fields in timer object pbhagavatula
2021-08-24 15:10  3%   ` Stephen Hemminger
2021-08-25 11:27     [dpdk-dev] [PATCH] doc: announce change in dma mapping/unmapping Xuan Ding
2021-08-25 11:47     ` Burakov, Anatoly
2021-08-26  9:29       ` Ferruh Yigit
2021-08-26  9:46  3%     ` Burakov, Anatoly
2021-08-26 10:09  3%       ` Bruce Richardson
2021-08-26 10:14  0%         ` Burakov, Anatoly
2021-08-31 13:42  0%           ` Ding, Xuan
2021-08-26 10:35 15% [dpdk-dev] [PATCH] doc: announce library refactor for ABI improvement Ferruh Yigit
2021-08-26 10:46  4% ` [dpdk-dev] [EXT] " Akhil Goyal
2021-08-26 10:47  4%   ` Jerin Jacob
2021-08-26 11:04  4%   ` Bruce Richardson
2021-08-26 15:44  4%     ` Andrew Rybchenko
2021-08-26 14:57  4% [dpdk-dev] [RFC 0/7] make rte_intr_handle internal Harman Kalra
2021-08-26 14:57  1% ` [dpdk-dev] [RFC 3/7] eal/interrupts: avoid direct access to interrupt handle Harman Kalra
2021-08-27  1:47  4% [dpdk-dev] [Bug 797] [dpdk-21.11]Segmentation fault when start txonly packet forward after set txpkts=40, 64 and txsplit=rand bugzilla
2021-08-27  6:56     [dpdk-dev] [PATCH 00/38] net/sfc: support port representors Andrew Rybchenko
2021-08-27  6:56  2% ` [dpdk-dev] [PATCH 01/38] common/sfc_efx/base: update MCDI headers Andrew Rybchenko
2021-08-29  8:48  4% [dpdk-dev] Marvell v21.11 Roadmap Jerin Jacob Kollanukkaran
2021-08-29 12:51     [dpdk-dev] [PATCH 0/8] cryptodev: hide internal strutures Akhil Goyal
2021-08-29 12:51  3% ` [dpdk-dev] [PATCH 2/8] cryptodev: move inline APIs into separate structure Akhil Goyal
2021-08-30 10:25     [dpdk-dev] [RFC 01/15] eventdev: make driver interface as internal Mattias Rönnblom
2021-08-30 16:00  2% ` [dpdk-dev] [RFC] eventdev: uninline inline API functions Mattias Rönnblom
2021-08-31 12:28  0%   ` Jerin Jacob
2021-08-31 12:34  0%     ` Mattias Rönnblom
2021-08-30 19:44     [dpdk-dev] [PATCH] eventdev: update crypto adapter metadata structures Shijith Thotton
2021-08-30 19:59     ` [dpdk-dev] [PATCH v2] " Shijith Thotton
2021-08-31  6:08  3%   ` Akhil Goyal
2021-08-31  6:51  0%     ` Shijith Thotton
2021-08-31  7:56  6%   ` [dpdk-dev] [PATCH v3] " Shijith Thotton
